Test Report: Docker_Linux_containerd_arm64 22094

                    
4d318e45b0dac190a241a23c5ddc63ef7c67bab3:2025-12-10:42711

Failed tests (35/417)

Order  Failed test  Duration (s)
29 TestDownloadOnlyKic 0.97
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 508.46
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 368.98
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 2.25
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 2.49
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 2.28
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 735.58
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 2.2
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 1.68
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 3.09
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.34
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 241.69
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 1.39
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.1
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 109.14
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.25
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.27
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.61
358 TestKubernetesUpgrade 793.84
413 TestStartStop/group/no-preload/serial/FirstStart 506.65
437 TestStartStop/group/newest-cni/serial/FirstStart 507.63
438 TestStartStop/group/no-preload/serial/DeployApp 2.99
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 110.09
442 TestStartStop/group/no-preload/serial/SecondStart 370.62
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 96.88
447 TestStartStop/group/newest-cni/serial/SecondStart 375.52
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.06
452 TestStartStop/group/newest-cni/serial/Pause 9.3
467 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 267.93
TestDownloadOnlyKic (0.97s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-870969 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:239: expected tarball file "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4" to exist, but got error: stat /home/jenkins/minikube-integration/22094-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4: no such file or directory
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-870969" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-870969
--- FAIL: TestDownloadOnlyKic (0.97s)
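The assertion at aaa_download_only_test.go:239 boils down to a stat on the expected preload tarball path. A minimal stand-alone sketch of that check, assuming MINIKUBE_HOME points at the .minikube directory shown above (names and the hard-coded path are illustrative, not minikube's actual helpers):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical path; the real test derives it from the profile's
	// Kubernetes version, container runtime, and architecture.
	tarball := os.ExpandEnv("$MINIKUBE_HOME/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		// The condition that fails above: the arm64 preload tarball was
		// never downloaded (tracked in minikube issue #10144).
		fmt.Printf("expected tarball %q to exist, but got error: %v\n", tarball, err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present")
}
```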

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (508.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1210 05:39:28.430390    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:41:44.575268    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:42:12.277052    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.013552    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.020472    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.032101    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.053507    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.094999    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.176508    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.337985    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:37.659727    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:38.301847    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:39.583456    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:42.146429    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:47.268915    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:43:57.510188    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:17.991828    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:44:58.955136    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:20.879947    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:46:44.571691    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m26.979015809s)

-- stdout --
	* [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:46207
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:46207 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-644034 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-644034 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001179696s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034011s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034011s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
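Exit status 109 is minikube's K8S_KUBELET_NOT_RUNNING code: kubeadm's wait-control-plane phase polled http://127.0.0.1:10248/healthz for four minutes and never got a healthy response. A minimal sketch of an equivalent probe; the retry cadence and client timeout here are illustrative, not kubeadm's exact values:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Matches the "This can take up to 4m0s" deadline in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet not healthy before deadline; check 'journalctl -xeu kubelet'")
}
```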
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
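The inspect output shows each container port published on an ephemeral localhost port (the apiserver's 8441/tcp maps to 127.0.0.1:32791). A short sketch of extracting that mapping with docker's standard Go-template format, roughly what the test helpers do conceptually:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The -f template is standard docker inspect Go-template syntax.
	out, err := exec.Command("docker", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-644034").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container above this prints 32791, bound to 127.0.0.1.
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}
```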
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 6 (319.279535ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 05:47:53.363814   51659 status.go:458] kubeconfig endpoint: get endpoint: "functional-644034" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
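The status.go:458 error means the profile has no entry in the kubeconfig the test run is using. A minimal sketch of that lookup with k8s.io/client-go's clientcmd loader, assuming KUBECONFIG is set as in the environment above (the exact check minikube performs may differ):

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Clusters["functional-644034"]; !ok {
		// The state reported above; `minikube update-context`
		// rewrites the entry to fix it.
		fmt.Println(`"functional-644034" does not appear in kubeconfig`)
	}
}
```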
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-944360 ssh sudo cat /usr/share/ca-certificates/41162.pem                                                                                             │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh sudo cat /etc/test/nested/copy/4116/hosts                                                                                                 │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image save kicbase/echo-server:functional-944360 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image rm kicbase/echo-server:functional-944360 --alsologtostderr                                                                              │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image save --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format short --alsologtostderr                                                                                                     │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format yaml --alsologtostderr                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh pgrep buildkitd                                                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ image          │ functional-944360 image ls --format json --alsologtostderr                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format table --alsologtostderr                                                                                                     │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr                                                          │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ delete         │ -p functional-944360                                                                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ start          │ -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:39:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:39:26.089172   45604 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:39:26.089294   45604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:39:26.089298   45604 out.go:374] Setting ErrFile to fd 2...
	I1210 05:39:26.089302   45604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:39:26.089540   45604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:39:26.089929   45604 out.go:368] Setting JSON to false
	I1210 05:39:26.090694   45604 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1316,"bootTime":1765343850,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:39:26.090752   45604 start.go:143] virtualization:  
	I1210 05:39:26.096836   45604 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:39:26.100058   45604 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:39:26.100191   45604 notify.go:221] Checking for updates...
	I1210 05:39:26.106194   45604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:39:26.109303   45604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:39:26.112286   45604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:39:26.115174   45604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:39:26.118313   45604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:39:26.121491   45604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:39:26.145596   45604 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:39:26.145713   45604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:39:26.208426   45604 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 05:39:26.199515704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:39:26.209398   45604 docker.go:319] overlay module found
	I1210 05:39:26.212513   45604 out.go:179] * Using the docker driver based on user configuration
	I1210 05:39:26.215359   45604 start.go:309] selected driver: docker
	I1210 05:39:26.215367   45604 start.go:927] validating driver "docker" against <nil>
	I1210 05:39:26.215378   45604 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:39:26.216079   45604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:39:26.282082   45604 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 05:39:26.272848425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:39:26.282220   45604 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:39:26.282435   45604 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:39:26.285275   45604 out.go:179] * Using Docker driver with root privileges
	I1210 05:39:26.288101   45604 cni.go:84] Creating CNI manager for ""
	I1210 05:39:26.288165   45604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:39:26.288171   45604 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
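	(The two cni.go lines above show minikube auto-selecting kindnet because the docker driver is paired with the containerd runtime. A minimal sketch of pinning that choice explicitly rather than relying on auto-detection; profile name and flags are taken from this run:)
	# force the CNI that minikube would otherwise auto-select for docker+containerd
	minikube start -p functional-644034 --driver=docker --container-runtime=containerd --cni=kindnet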
	I1210 05:39:26.288246   45604 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:39:26.291396   45604 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:39:26.294271   45604 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:39:26.297088   45604 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:39:26.299982   45604 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:39:26.300071   45604 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:39:26.319169   45604 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:39:26.319180   45604 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:39:26.366242   45604 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:39:26.523420   45604 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 05:39:26.523705   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
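	(The two 404s above mean no preload tarball has been published for v1.35.0-rc.1, so minikube falls back to caching images individually and fetching kubeadm straight from dl.k8s.io, as the repeated binary.go:80 lines show. A quick way to confirm the missing artifact by hand, using the exact URL from the log:)
	# HEAD request against the preload bucket; a 404 status line reproduces the warning above
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 | head -n1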
	I1210 05:39:26.523845   45604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:39:26.523871   45604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json: {Name:mk3bb53d1bf270cbe9496b64b85c2bf0e68a091f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:39:26.524051   45604 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:39:26.524090   45604 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:26.524135   45604 start.go:364] duration metric: took 36.153µs to acquireMachinesLock for "functional-644034"
	I1210 05:39:26.524151   45604 start.go:93] Provisioning new machine with config: &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 05:39:26.524221   45604 start.go:125] createHost starting for "" (driver="docker")
	I1210 05:39:26.529647   45604 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1210 05:39:26.529987   45604 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46207 to docker env.
	I1210 05:39:26.530016   45604 start.go:159] libmachine.API.Create for "functional-644034" (driver="docker")
	I1210 05:39:26.530047   45604 client.go:173] LocalClient.Create starting
	I1210 05:39:26.530124   45604 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 05:39:26.530160   45604 main.go:143] libmachine: Decoding PEM data...
	I1210 05:39:26.530177   45604 main.go:143] libmachine: Parsing certificate...
	I1210 05:39:26.530244   45604 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 05:39:26.530269   45604 main.go:143] libmachine: Decoding PEM data...
	I1210 05:39:26.530282   45604 main.go:143] libmachine: Parsing certificate...
	I1210 05:39:26.530690   45604 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 05:39:26.561324   45604 cli_runner.go:211] docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 05:39:26.561405   45604 network_create.go:284] running [docker network inspect functional-644034] to gather additional debugging logs...
	I1210 05:39:26.561420   45604 cli_runner.go:164] Run: docker network inspect functional-644034
	W1210 05:39:26.577444   45604 cli_runner.go:211] docker network inspect functional-644034 returned with exit code 1
	I1210 05:39:26.577463   45604 network_create.go:287] error running [docker network inspect functional-644034]: docker network inspect functional-644034: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-644034 not found
	I1210 05:39:26.577477   45604 network_create.go:289] output of [docker network inspect functional-644034]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-644034 not found
	
	** /stderr **
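	(The exit-code-1 inspect above is the expected pre-flight check: the profile network does not exist yet, and minikube uses the failure to decide it must create one. The same probe by hand, assuming a local docker daemon:)
	# exit status 1 simply means the network has not been created yet
	docker network inspect functional-644034 --format '{{.Name}}' || echo 'not found - expected before creation'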
	I1210 05:39:26.577587   45604 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:39:26.604238   45604 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a5650}
	I1210 05:39:26.604267   45604 network_create.go:124] attempt to create docker network functional-644034 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 05:39:26.604332   45604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-644034 functional-644034
	I1210 05:39:26.665211   45604 network_create.go:108] docker network functional-644034 192.168.49.0/24 created
	I1210 05:39:26.665232   45604 kic.go:121] calculated static IP "192.168.49.2" for the "functional-644034" container
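	(Once the bridge network exists, the node IP is deterministic: the first client address of the chosen subnet, 192.168.49.2 in 192.168.49.0/24. A hedged verification sketch against the network just created:)
	# print the subnet and gateway minikube settled on
	docker network inspect functional-644034 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'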
	I1210 05:39:26.665318   45604 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 05:39:26.680497   45604 cli_runner.go:164] Run: docker volume create functional-644034 --label name.minikube.sigs.k8s.io=functional-644034 --label created_by.minikube.sigs.k8s.io=true
	I1210 05:39:26.684162   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:39:26.697879   45604 oci.go:103] Successfully created a docker volume functional-644034
	I1210 05:39:26.697964   45604 cli_runner.go:164] Run: docker run --rm --name functional-644034-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-644034 --entrypoint /usr/bin/test -v functional-644034:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 05:39:26.849420   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:39:27.025704   45604 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.025794   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:39:27.025801   45604 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.963µs
	I1210 05:39:27.025811   45604 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:39:27.025821   45604 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.025849   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:39:27.025853   45604 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 33.617µs
	I1210 05:39:27.025858   45604 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:39:27.025865   45604 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.025891   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:39:27.025895   45604 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 30.802µs
	I1210 05:39:27.025905   45604 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:39:27.025920   45604 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.025944   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:39:27.025950   45604 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 30.343µs
	I1210 05:39:27.025954   45604 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:39:27.025961   45604 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.025985   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:39:27.025988   45604 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 27.98µs
	I1210 05:39:27.025992   45604 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:39:27.026000   45604 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.026023   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:39:27.026026   45604 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 27.2µs
	I1210 05:39:27.026030   45604 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:39:27.026037   45604 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.026060   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:39:27.026063   45604 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 26.971µs
	I1210 05:39:27.026068   45604 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:39:27.026075   45604 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:39:27.026097   45604 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:39:27.026100   45604 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 26.404µs
	I1210 05:39:27.026105   45604 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:39:27.026110   45604 cache.go:87] Successfully saved all images to host disk.
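	(Because the preload is unavailable, this per-image cache is the sole source of Kubernetes images for the cluster; every "save to tar file" entry above resolves to a file under the integration workspace. To inspect it, with paths taken verbatim from the log:)
	# each file here is a saved image tar that will be scp'd into the node later
	ls /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/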
	I1210 05:39:27.238233   45604 oci.go:107] Successfully prepared a docker volume functional-644034
	I1210 05:39:27.238309   45604 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 05:39:27.238456   45604 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 05:39:27.238560   45604 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 05:39:27.295319   45604 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-644034 --name functional-644034 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-644034 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-644034 --network functional-644034 --ip 192.168.49.2 --volume functional-644034:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 05:39:27.593179   45604 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Running}}
	I1210 05:39:27.615863   45604 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:39:27.634292   45604 cli_runner.go:164] Run: docker exec functional-644034 stat /var/lib/dpkg/alternatives/iptables
	I1210 05:39:27.682672   45604 oci.go:144] the created container "functional-644034" has a running status.
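	(The long docker run above publishes SSH (22), the API server port (8441), and the docker/registry ports on random loopback ports. A minimal way to see the actual host-side mappings for this container:)
	# list the host ports assigned to 22/tcp, 8441/tcp, etc.
	docker port functional-644034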
	I1210 05:39:27.682690   45604 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa...
	I1210 05:39:27.845308   45604 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 05:39:27.872635   45604 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:39:27.900829   45604 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 05:39:27.900841   45604 kic_runner.go:114] Args: [docker exec --privileged functional-644034 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 05:39:27.966595   45604 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:39:28.003417   45604 machine.go:94] provisionDockerMachine start ...
	I1210 05:39:28.003522   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:28.022450   45604 main.go:143] libmachine: Using SSH client type: native
	I1210 05:39:28.022781   45604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:39:28.022787   45604 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:39:28.023453   45604 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 05:39:31.174568   45604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:39:31.174582   45604 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:39:31.174643   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:31.192239   45604 main.go:143] libmachine: Using SSH client type: native
	I1210 05:39:31.192553   45604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:39:31.192562   45604 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:39:31.352493   45604 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:39:31.352575   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:31.370438   45604 main.go:143] libmachine: Using SSH client type: native
	I1210 05:39:31.370733   45604 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:39:31.370746   45604 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:39:31.519203   45604 main.go:143] libmachine: SSH cmd err, output: <nil>: 
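	(Provisioning talks to the node over the randomly mapped SSH port, 32788 in this run, with the key generated at kic.go:225. An equivalent manual session using the same identity, port, and user as above; the StrictHostKeyChecking option is only added here to skip the interactive prompt:)
	# hostname should echo back functional-644034
	ssh -i /home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa -p 32788 -o StrictHostKeyChecking=no docker@127.0.0.1 hostname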
	I1210 05:39:31.519241   45604 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:39:31.519262   45604 ubuntu.go:190] setting up certificates
	I1210 05:39:31.519270   45604 provision.go:84] configureAuth start
	I1210 05:39:31.519327   45604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:39:31.536367   45604 provision.go:143] copyHostCerts
	I1210 05:39:31.536423   45604 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:39:31.536429   45604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:39:31.536508   45604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:39:31.536597   45604 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:39:31.536601   45604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:39:31.536626   45604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:39:31.536677   45604 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:39:31.536680   45604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:39:31.536701   45604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:39:31.536748   45604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:39:32.602289   45604 provision.go:177] copyRemoteCerts
	I1210 05:39:32.602345   45604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:39:32.602399   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:32.620088   45604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:39:32.722747   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:39:32.740402   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:39:32.758142   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:39:32.775121   45604 provision.go:87] duration metric: took 1.255829922s to configureAuth
	I1210 05:39:32.775138   45604 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:39:32.775322   45604 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:39:32.775327   45604 machine.go:97] duration metric: took 4.771899294s to provisionDockerMachine
	I1210 05:39:32.775332   45604 client.go:176] duration metric: took 6.245281158s to LocalClient.Create
	I1210 05:39:32.775344   45604 start.go:167] duration metric: took 6.245333869s to libmachine.API.Create "functional-644034"
	I1210 05:39:32.775350   45604 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:39:32.775358   45604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:39:32.775409   45604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:39:32.775445   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:32.793921   45604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:39:32.898883   45604 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:39:32.902155   45604 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:39:32.902172   45604 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:39:32.902183   45604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:39:32.902236   45604 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:39:32.902331   45604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:39:32.902409   45604 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:39:32.902454   45604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:39:32.909915   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:39:32.926713   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:39:32.943442   45604 start.go:296] duration metric: took 168.079959ms for postStartSetup
	I1210 05:39:32.943790   45604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:39:32.960422   45604 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:39:32.960696   45604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:39:32.960751   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:32.977149   45604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:39:33.083942   45604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:39:33.088479   45604 start.go:128] duration metric: took 6.564244077s to createHost
	I1210 05:39:33.088494   45604 start.go:83] releasing machines lock for "functional-644034", held for 6.564352805s
	I1210 05:39:33.088562   45604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:39:33.110186   45604 out.go:179] * Found network options:
	I1210 05:39:33.113102   45604 out.go:179]   - HTTP_PROXY=localhost:46207
	W1210 05:39:33.116260   45604 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1210 05:39:33.121311   45604 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1210 05:39:33.124407   45604 ssh_runner.go:195] Run: cat /version.json
	I1210 05:39:33.124421   45604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:39:33.124459   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:33.124477   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:33.148307   45604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:39:33.149754   45604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:39:33.338662   45604 ssh_runner.go:195] Run: systemctl --version
	I1210 05:39:33.345336   45604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:39:33.350602   45604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:39:33.350689   45604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:39:33.376714   45604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 05:39:33.376727   45604 start.go:496] detecting cgroup driver to use...
	I1210 05:39:33.376770   45604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:39:33.376828   45604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:39:33.391410   45604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:39:33.404259   45604 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:39:33.404315   45604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:39:33.422064   45604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:39:33.439912   45604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:39:33.555763   45604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:39:33.674450   45604 docker.go:234] disabling docker service ...
	I1210 05:39:33.674515   45604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:39:33.695744   45604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:39:33.709732   45604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:39:33.832362   45604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:39:33.958031   45604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:39:33.971301   45604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:39:33.985937   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:39:34.136903   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:39:34.146537   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:39:34.155373   45604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:39:34.155431   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:39:34.163948   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:39:34.172698   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:39:34.181292   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:39:34.189856   45604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:39:34.197660   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:39:34.211460   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:39:34.220703   45604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
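	(The sed runs above rewrite /etc/containerd/config.toml in place: pause:3.10.1 as the sandbox image, SystemdCgroup=false to match the cgroupfs driver detected on the host, and the runc v2 shim everywhere. One spot check, assuming the edits landed before the containerd restart below:)
	# expect: SystemdCgroup = false after the cgroupfs edit
	docker exec functional-644034 grep -n 'SystemdCgroup' /etc/containerd/config.toml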
	I1210 05:39:34.231158   45604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:39:34.239141   45604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:39:34.246919   45604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:39:34.353908   45604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:39:34.459398   45604 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:39:34.459457   45604 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:39:34.463778   45604 start.go:564] Will wait 60s for crictl version
	I1210 05:39:34.463831   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:34.467712   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:39:34.491631   45604 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:39:34.491699   45604 ssh_runner.go:195] Run: containerd --version
	I1210 05:39:34.517017   45604 ssh_runner.go:195] Run: containerd --version
	I1210 05:39:34.543868   45604 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:39:34.546889   45604 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:39:34.563843   45604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:39:34.567522   45604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:39:34.577253   45604 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:39:34.577427   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:39:34.727030   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:39:34.881856   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:39:35.039979   45604 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:39:35.040063   45604 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:39:35.063725   45604 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 05:39:35.063738   45604 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
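	(crictl reporting no kube-apiserver image confirms the node started from a bare kicbase rootfs, which is why LoadCachedImages now has to transfer all eight images listed above. The same check with ctr, mirroring the per-image existence probes further below:)
	# empty (or kicbase-only) output here means the k8s images must be loaded from the cache
	docker exec functional-644034 sudo ctr -n=k8s.io images ls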
	I1210 05:39:35.063775   45604 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:39:35.063977   45604 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:39:35.064054   45604 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:39:35.064116   45604 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:39:35.064192   45604 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:39:35.064264   45604 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:39:35.064327   45604 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 05:39:35.064392   45604 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:39:35.065826   45604 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:39:35.066176   45604 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:39:35.066368   45604 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:39:35.066491   45604 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:39:35.066692   45604 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 05:39:35.066816   45604 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:39:35.066920   45604 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:39:35.067755   45604 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:39:35.391271   45604 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 05:39:35.391332   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:39:35.407089   45604 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 05:39:35.407167   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:39:35.412765   45604 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 05:39:35.412797   45604 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:39:35.412852   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:35.413557   45604 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 05:39:35.413619   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 05:39:35.418300   45604 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 05:39:35.418369   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:39:35.424784   45604 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 05:39:35.424856   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:39:35.428541   45604 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 05:39:35.428598   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:39:35.428872   45604 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 05:39:35.428907   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 05:39:35.440422   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:39:35.440478   45604 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 05:39:35.440535   45604 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:39:35.440561   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:35.479842   45604 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 05:39:35.479872   45604 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 05:39:35.479927   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:35.484519   45604 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 05:39:35.484558   45604 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:39:35.484605   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:35.484655   45604 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 05:39:35.484669   45604 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:39:35.484707   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:35.503705   45604 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 05:39:35.503737   45604 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 05:39:35.503791   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:35.503856   45604 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 05:39:35.503867   45604 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:39:35.503885   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:35.514134   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:39:35.514244   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:39:35.514297   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:39:35.514357   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:39:35.514412   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:39:35.514466   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:39:35.514514   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 05:39:35.625319   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 05:39:35.625400   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:39:35.625467   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:39:35.625557   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:39:35.625655   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:39:35.625722   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:39:35.625767   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:39:35.725390   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 05:39:35.725466   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:39:35.725497   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:39:35.725606   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 05:39:35.725643   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:39:35.725692   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 05:39:35.725756   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:39:35.725847   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 05:39:35.819231   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 05:39:35.819336   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 05:39:35.819408   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 05:39:35.819458   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 05:39:35.819509   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 05:39:35.819546   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 05:39:35.819583   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 05:39:35.819622   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 05:39:35.819654   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 05:39:35.819689   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 05:39:35.819739   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 05:39:35.819773   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 05:39:35.819185   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 05:39:35.819812   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 05:39:35.834123   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 05:39:35.834158   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 05:39:35.856217   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 05:39:35.856246   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 05:39:35.856328   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 05:39:35.856337   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 05:39:35.856384   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 05:39:35.856392   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 05:39:35.856436   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 05:39:35.856444   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1210 05:39:35.856573   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 05:39:35.856581   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
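
The stat-then-scp pairs above are the cache-transfer pattern: each image tarball is stat'ed on the node first, and only copied over from the host-side cache when the stat exits non-zero. A minimal Go sketch of that pattern, assuming a reachable ssh host alias and illustrative paths (this is not minikube's internal API):

// existence check, then transfer: stat the remote file over ssh and,
// when it is missing, fall back to scp from the local cache.
package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile mirrors the log's behavior; "host" is an ssh alias
// and the paths are placeholders for illustration.
func ensureRemoteFile(host, localPath, remotePath string) error {
	stat := exec.Command("ssh", host, "stat", "-c", "%s %y", remotePath)
	if err := stat.Run(); err == nil {
		return nil // already present on the node; skip the transfer
	}
	scp := exec.Command("scp", localPath, host+":"+remotePath)
	if out, err := scp.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s: %v: %s", localPath, err, out)
	}
	return nil
}

func main() {
	err := ensureRemoteFile("minikube",
		"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1")
	fmt.Println(err)
}
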
	W1210 05:39:35.869756   45604 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1210 05:39:35.869790   45604 retry.go:31] will retry after 298.797038ms: ssh: rejected: connect failed (open failed)
	W1210 05:39:35.869802   45604 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1210 05:39:35.869806   45604 retry.go:31] will retry after 133.924553ms: ssh: rejected: connect failed (open failed)
	W1210 05:39:35.869812   45604 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1210 05:39:35.869815   45604 retry.go:31] will retry after 221.711178ms: ssh: rejected: connect failed (open failed)
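
The three warnings above show the session-retry behavior: a failed ssh session is reset and retried after a randomized sub-second delay (298ms, 133ms, 221ms in this run). A hypothetical sketch of that retry loop; the delays and error text are taken from the log, while the helper name and delay range are invented for illustration:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// withRetry retries op after a random 100-400ms delay, echoing the
// "will retry after ..." lines in the log.
func withRetry(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = withRetry(3, func() error {
		return errors.New("ssh: rejected: connect failed (open failed)")
	})
}
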
	I1210 05:39:36.003913   45604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:39:36.024619   45604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	W1210 05:39:36.364413   45604 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 05:39:36.366458   45604 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 05:39:36.366602   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:39:36.407378   45604 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 05:39:36.407456   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 05:39:37.941432   45604 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.533956198s)
	I1210 05:39:37.941448   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 05:39:37.941463   45604 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 05:39:37.941507   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 05:39:37.941568   45604 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.574957534s)
	I1210 05:39:37.941584   45604 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 05:39:37.941606   45604 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:39:37.941628   45604 ssh_runner.go:195] Run: which crictl
	I1210 05:39:38.944627   45604 ssh_runner.go:235] Completed: which crictl: (1.002979758s)
	I1210 05:39:38.944690   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:39:38.944742   45604 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.003228292s)
	I1210 05:39:38.944751   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 05:39:38.944766   45604 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 05:39:38.944786   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 05:39:40.265569   45604 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.320752836s)
	I1210 05:39:40.265586   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 05:39:40.265603   45604 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 05:39:40.265616   45604 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.32091305s)
	I1210 05:39:40.265659   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 05:39:40.265666   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:39:40.290831   45604 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:39:41.193217   45604 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 05:39:41.193299   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:39:41.193353   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 05:39:41.193367   45604 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 05:39:41.193393   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1210 05:39:41.331785   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 05:39:41.331807   45604 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 05:39:41.331884   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 05:39:41.331889   45604 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 05:39:41.331913   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 05:39:42.132750   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 05:39:42.132783   45604 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 05:39:42.132855   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 05:39:43.018579   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 05:39:43.018615   45604 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:39:43.018672   45604 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 05:39:43.398696   45604 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 05:39:43.398726   45604 cache_images.go:125] Successfully loaded all cached images
	I1210 05:39:43.398730   45604 cache_images.go:94] duration metric: took 8.334979909s to LoadCachedImages
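
LoadCachedImages finishes once every tarball has been imported into containerd's k8s.io namespace, one `ctr images import` at a time, and the total is reported as a duration metric (8.33s here). A sketch of that sequential phase, with the image list abbreviated:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Abbreviated subset of the tarballs transferred above.
	images := []string{
		"/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1",
		"/var/lib/minikube/images/coredns_v1.13.1",
		"/var/lib/minikube/images/etcd_3.6.6-0",
	}
	start := time.Now()
	for _, img := range images {
		// Import each image into containerd's k8s.io namespace.
		cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", img)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("import %s failed: %v: %s\n", img, err, out)
			return
		}
	}
	fmt.Printf("duration metric: took %s to LoadCachedImages\n", time.Since(start))
}
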
	I1210 05:39:43.398741   45604 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:39:43.398834   45604 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
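
The kubelet unit text above is rendered from the cluster parameters that follow it. A sketch, assuming a plain text/template over the hostname, node IP, and Kubernetes version (the real generator lives in minikube and is not shown here):

package main

import (
	"os"
	"text/template"
)

// unit reproduces the drop-in printed in the log, with the three
// variable fields templated out.
const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.35.0-rc.1", "Node": "functional-644034", "IP": "192.168.49.2",
	})
}
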
	I1210 05:39:43.398927   45604 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:39:43.422995   45604 cni.go:84] Creating CNI manager for ""
	I1210 05:39:43.423006   45604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:39:43.423044   45604 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:39:43.423065   45604 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:39:43.423244   45604 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
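
The rendered kubeadm.yaml above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A small sketch that splits such a file back into its documents; the path matches the log, the output format is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Multi-document YAML: each document starts after a "---" separator line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		fmt.Printf("document %d starts with: %.60s\n", i, strings.TrimSpace(doc))
	}
}
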
	
	I1210 05:39:43.423322   45604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:39:43.430935   45604 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 05:39:43.430990   45604 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:39:43.438734   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 05:39:43.438838   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 05:39:43.438920   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 05:39:43.438956   45604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:39:43.439067   45604 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:39:43.439113   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 05:39:43.443681   45604 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 05:39:43.443720   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 05:39:43.461586   45604 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 05:39:43.461614   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 05:39:43.461759   45604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 05:39:43.470972   45604 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 05:39:43.471005   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
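
Each binary above is fetched from dl.k8s.io with a `?checksum=file:...sha256` query, i.e. the download is verified against the published SHA-256 digest before it is transferred to the node. A minimal sketch of that verification step, with local file names assumed for illustration:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verify hashes the downloaded binary and compares it with the digest
// published alongside it (.sha256 files carry the bare hex digest).
func verify(binPath, sumPath string) (bool, error) {
	f, err := os.Open(binPath)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == strings.TrimSpace(string(want)), nil
}

func main() {
	ok, err := verify("kubectl", "kubectl.sha256")
	fmt.Println(ok, err)
}
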
	I1210 05:39:44.228851   45604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:39:44.238307   45604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:39:44.251814   45604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:39:44.265200   45604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 05:39:44.278941   45604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:39:44.283088   45604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:39:44.293213   45604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:39:44.399338   45604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:39:44.417680   45604 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:39:44.417691   45604 certs.go:195] generating shared ca certs ...
	I1210 05:39:44.417705   45604 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:39:44.417851   45604 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:39:44.417897   45604 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:39:44.417903   45604 certs.go:257] generating profile certs ...
	I1210 05:39:44.417955   45604 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:39:44.417964   45604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt with IP's: []
	I1210 05:39:44.524771   45604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt ...
	I1210 05:39:44.524788   45604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: {Name:mkd16c28a8556c4869a503a80ebad4e5a5d129b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:39:44.524989   45604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key ...
	I1210 05:39:44.524996   45604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key: {Name:mkf7a01f9823bd7a73f59d49610934ca14182f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:39:44.525084   45604 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:39:44.525098   45604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt.40bc062c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 05:39:44.832112   45604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt.40bc062c ...
	I1210 05:39:44.832125   45604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt.40bc062c: {Name:mk49f699bdcb144fb520ccf480e0bee9498fa454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:39:44.832320   45604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c ...
	I1210 05:39:44.832328   45604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c: {Name:mka2dee7cbe0a8a0724c9afadb886fc4a5ff9695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:39:44.832408   45604 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt.40bc062c -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt
	I1210 05:39:44.832515   45604 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key
	I1210 05:39:44.832569   45604 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:39:44.832585   45604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt with IP's: []
	I1210 05:39:45.056718   45604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt ...
	I1210 05:39:45.056737   45604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt: {Name:mkfa70dc957571ca708f44643d9142bf57f02aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:39:45.056974   45604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key ...
	I1210 05:39:45.056984   45604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key: {Name:mk4ea5957c136eddfd49ada7051b725d799a497d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
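
The profile certs above are generated with explicit IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2). A hedged sketch of what such a certificate template looks like with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// The IP SANs listed in the log for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate\n", len(der))
}
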
	I1210 05:39:45.057285   45604 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:39:45.057342   45604 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:39:45.057351   45604 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:39:45.057388   45604 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:39:45.057416   45604 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:39:45.057477   45604 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:39:45.057531   45604 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:39:45.058230   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:39:45.092891   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:39:45.116800   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:39:45.147602   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:39:45.182477   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:39:45.210871   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:39:45.241051   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:39:45.266996   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:39:45.289812   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:39:45.308879   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:39:45.329840   45604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:39:45.352415   45604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:39:45.366907   45604 ssh_runner.go:195] Run: openssl version
	I1210 05:39:45.374429   45604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:39:45.382204   45604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:39:45.390129   45604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:39:45.394279   45604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:39:45.394333   45604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:39:45.435630   45604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:39:45.443704   45604 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 05:39:45.452850   45604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:39:45.460677   45604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:39:45.468384   45604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:39:45.472385   45604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:39:45.472448   45604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:39:45.513288   45604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:39:45.520773   45604 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
	I1210 05:39:45.528261   45604 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:39:45.535823   45604 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:39:45.543567   45604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:39:45.547637   45604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:39:45.547690   45604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:39:45.588611   45604 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:39:45.596478   45604 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
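
Each CA above is installed by hashing it with `openssl x509 -hash` and symlinking it as /etc/ssl/certs/<subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A sketch of that step; unlike `ln -fs`, os.Symlink does not replace an existing link, so the old link is removed first:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash that names the symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // emulate ln -fs: drop any existing link
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("linked", link)
}
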
	I1210 05:39:45.604093   45604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:39:45.607842   45604 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:39:45.607884   45604 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:39:45.607973   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:39:45.608028   45604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:39:45.633433   45604 cri.go:89] found id: ""
	I1210 05:39:45.633495   45604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:39:45.641667   45604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:39:45.649650   45604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:39:45.649701   45604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:39:45.658011   45604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:39:45.658023   45604 kubeadm.go:158] found existing configuration files:
	
	I1210 05:39:45.658091   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:39:45.666430   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:39:45.666484   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:39:45.674511   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:39:45.682473   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:39:45.682536   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:39:45.690549   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:39:45.698880   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:39:45.698939   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:39:45.706519   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:39:45.714429   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:39:45.714493   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
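
The grep/rm sequence above is the stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8441, and removed otherwise so kubeadm can regenerate it. A sketch of the same loop:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing %s (missing or stale)\n", f)
			_ = os.Remove(f)
		}
	}
}
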
	I1210 05:39:45.722008   45604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:39:45.762007   45604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:39:45.762152   45604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:39:45.825425   45604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:39:45.825488   45604 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 05:39:45.825522   45604 kubeadm.go:319] OS: Linux
	I1210 05:39:45.825566   45604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:39:45.825613   45604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:39:45.825659   45604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:39:45.825706   45604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:39:45.825752   45604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:39:45.825802   45604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:39:45.825846   45604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:39:45.825892   45604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:39:45.825937   45604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:39:45.906099   45604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:39:45.906202   45604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:39:45.906291   45604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:39:45.915525   45604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:39:45.924456   45604 out.go:252]   - Generating certificates and keys ...
	I1210 05:39:45.924549   45604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:39:45.924614   45604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:39:46.067239   45604 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:39:46.142904   45604 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:39:46.507862   45604 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:39:47.059800   45604 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:39:47.617805   45604 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:39:47.618112   45604 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-644034 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:39:47.959786   45604 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:39:47.960063   45604 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-644034 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:39:48.049143   45604 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:39:48.225187   45604 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:39:48.881732   45604 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:39:48.881951   45604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:39:49.192040   45604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:39:49.363354   45604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:39:49.433227   45604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:39:49.863154   45604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:39:50.142479   45604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:39:50.143365   45604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:39:50.147620   45604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:39:50.198860   45604 out.go:252]   - Booting up control plane ...
	I1210 05:39:50.198974   45604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:39:50.199089   45604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:39:50.199159   45604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:39:50.199262   45604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:39:50.199355   45604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:39:50.199461   45604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:39:50.199544   45604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:39:50.199582   45604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:39:50.329179   45604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:39:50.329297   45604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:43:50.330302   45604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001179696s
	I1210 05:43:50.330330   45604 kubeadm.go:319] 
	I1210 05:43:50.330436   45604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 05:43:50.330494   45604 kubeadm.go:319] 	- The kubelet is not running
	I1210 05:43:50.330831   45604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 05:43:50.330840   45604 kubeadm.go:319] 
	I1210 05:43:50.331075   45604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 05:43:50.331376   45604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 05:43:50.331432   45604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 05:43:50.331436   45604 kubeadm.go:319] 
	I1210 05:43:50.335553   45604 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 05:43:50.335973   45604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 05:43:50.336083   45604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:43:50.336330   45604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 05:43:50.336335   45604 kubeadm.go:319] 
	I1210 05:43:50.336403   45604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 05:43:50.336577   45604 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-644034 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-644034 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001179696s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
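
The failed wait above is kubeadm polling the kubelet's local healthz endpoint (http://127.0.0.1:10248/healthz, as quoted in the error) until it answers or the 4m0s deadline expires; in this run the kubelet never became healthy. A sketch of that probe loop, assuming the endpoint and deadline from the log and a 2-second poll interval:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint until it returns
// 200 or the context deadline expires, matching the logged behavior.
func waitKubeletHealthy(ctx context.Context) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-tick.C:
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitKubeletHealthy(ctx))
}
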
	
	I1210 05:43:50.336673   45604 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 05:43:50.745103   45604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:43:50.758877   45604 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:43:50.758937   45604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:43:50.767130   45604 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:43:50.767138   45604 kubeadm.go:158] found existing configuration files:
	
	I1210 05:43:50.767201   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:43:50.775141   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:43:50.775197   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:43:50.782695   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:43:50.790800   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:43:50.790859   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:43:50.798987   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:43:50.807248   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:43:50.807308   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:43:50.815282   45604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:43:50.823741   45604 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:43:50.823799   45604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:43:50.832197   45604 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:43:50.873575   45604 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:43:50.873667   45604 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:43:50.951699   45604 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:43:50.951809   45604 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 05:43:50.951858   45604 kubeadm.go:319] OS: Linux
	I1210 05:43:50.951904   45604 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:43:50.952011   45604 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:43:50.952067   45604 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:43:50.952117   45604 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:43:50.952172   45604 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:43:50.952219   45604 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:43:50.952278   45604 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:43:50.952342   45604 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:43:50.952387   45604 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:43:51.020400   45604 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:43:51.020530   45604 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:43:51.020625   45604 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:43:51.027521   45604 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:43:51.031038   45604 out.go:252]   - Generating certificates and keys ...
	I1210 05:43:51.031116   45604 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:43:51.031178   45604 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:43:51.031253   45604 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 05:43:51.031314   45604 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 05:43:51.031383   45604 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 05:43:51.031435   45604 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 05:43:51.031497   45604 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 05:43:51.031557   45604 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 05:43:51.031630   45604 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 05:43:51.031702   45604 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 05:43:51.031973   45604 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 05:43:51.032040   45604 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:43:51.226985   45604 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:43:51.403772   45604 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:43:51.751729   45604 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:43:52.045162   45604 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:43:52.420958   45604 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:43:52.421533   45604 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:43:52.424196   45604 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:43:52.427502   45604 out.go:252]   - Booting up control plane ...
	I1210 05:43:52.427601   45604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:43:52.427682   45604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:43:52.427771   45604 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:43:52.449461   45604 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:43:52.449714   45604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:43:52.457916   45604 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:43:52.458009   45604 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:43:52.458047   45604 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:43:52.588643   45604 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:43:52.588798   45604 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:47:52.589529   45604 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001034011s
	I1210 05:47:52.589553   45604 kubeadm.go:319] 
	I1210 05:47:52.589655   45604 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 05:47:52.589870   45604 kubeadm.go:319] 	- The kubelet is not running
	I1210 05:47:52.590059   45604 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 05:47:52.590068   45604 kubeadm.go:319] 
	I1210 05:47:52.590255   45604 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 05:47:52.590555   45604 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 05:47:52.590610   45604 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 05:47:52.590614   45604 kubeadm.go:319] 
	I1210 05:47:52.595143   45604 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 05:47:52.595574   45604 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 05:47:52.595681   45604 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:47:52.595916   45604 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 05:47:52.595920   45604 kubeadm.go:319] 
	I1210 05:47:52.595986   45604 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 05:47:52.596049   45604 kubeadm.go:403] duration metric: took 8m6.988169592s to StartCluster
	I1210 05:47:52.596080   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:47:52.596146   45604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:47:52.620260   45604 cri.go:89] found id: ""
	I1210 05:47:52.620285   45604 logs.go:282] 0 containers: []
	W1210 05:47:52.620292   45604 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:47:52.620297   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:47:52.620354   45604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:47:52.652865   45604 cri.go:89] found id: ""
	I1210 05:47:52.652878   45604 logs.go:282] 0 containers: []
	W1210 05:47:52.652896   45604 logs.go:284] No container was found matching "etcd"
	I1210 05:47:52.652902   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:47:52.652960   45604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:47:52.677429   45604 cri.go:89] found id: ""
	I1210 05:47:52.677449   45604 logs.go:282] 0 containers: []
	W1210 05:47:52.677457   45604 logs.go:284] No container was found matching "coredns"
	I1210 05:47:52.677462   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:47:52.677526   45604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:47:52.701945   45604 cri.go:89] found id: ""
	I1210 05:47:52.701959   45604 logs.go:282] 0 containers: []
	W1210 05:47:52.701966   45604 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:47:52.701971   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:47:52.702027   45604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:47:52.726757   45604 cri.go:89] found id: ""
	I1210 05:47:52.726770   45604 logs.go:282] 0 containers: []
	W1210 05:47:52.726777   45604 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:47:52.726782   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:47:52.726882   45604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:47:52.752502   45604 cri.go:89] found id: ""
	I1210 05:47:52.752516   45604 logs.go:282] 0 containers: []
	W1210 05:47:52.752522   45604 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:47:52.752528   45604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:47:52.752586   45604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:47:52.776147   45604 cri.go:89] found id: ""
	I1210 05:47:52.776161   45604 logs.go:282] 0 containers: []
	W1210 05:47:52.776168   45604 logs.go:284] No container was found matching "kindnet"
	I1210 05:47:52.776177   45604 logs.go:123] Gathering logs for containerd ...
	I1210 05:47:52.776187   45604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:47:52.817832   45604 logs.go:123] Gathering logs for container status ...
	I1210 05:47:52.817849   45604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:47:52.850443   45604 logs.go:123] Gathering logs for kubelet ...
	I1210 05:47:52.850459   45604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:47:52.909549   45604 logs.go:123] Gathering logs for dmesg ...
	I1210 05:47:52.909568   45604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:47:52.921110   45604 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:47:52.921133   45604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:47:52.989899   45604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:47:52.980993    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.981733    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.983553    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.984219    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.985178    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:47:52.980993    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.981733    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.983553    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.984219    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:52.985178    5409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 05:47:52.989913   45604 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034011s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 05:47:52.989955   45604 out.go:285] * 
	W1210 05:47:52.990052   45604 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034011s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 05:47:52.990097   45604 out.go:285] * 
	W1210 05:47:52.992448   45604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:47:53.007869   45604 out.go:203] 
	W1210 05:47:53.010813   45604 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001034011s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 05:47:53.010870   45604 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 05:47:53.010890   45604 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 05:47:53.014111   45604 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:39:38 functional-644034 containerd[760]: time="2025-12-10T05:39:38.944950721Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:40 functional-644034 containerd[760]: time="2025-12-10T05:39:40.255617089Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 10 05:39:40 functional-644034 containerd[760]: time="2025-12-10T05:39:40.258058847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 10 05:39:40 functional-644034 containerd[760]: time="2025-12-10T05:39:40.273714046Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:40 functional-644034 containerd[760]: time="2025-12-10T05:39:40.274797624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.183248797Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.185449019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.192957096Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.193570860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.323704466Z" level=info msg="No images store for sha256:93c8ef8189dfce1093586cb6e184216b3f44fae01ceeb87b927be8638e1b7922"
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.326141399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\""
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.338695701Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 10 05:39:41 functional-644034 containerd[760]: time="2025-12-10T05:39:41.339038054Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 10 05:39:42 functional-644034 containerd[760]: time="2025-12-10T05:39:42.121319585Z" level=info msg="No images store for sha256:d508d421ac9749bf14983fef501ea5485d3d398a6ca3f4db9ba97269e261f5f9"
	Dec 10 05:39:42 functional-644034 containerd[760]: time="2025-12-10T05:39:42.123676772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\""
	Dec 10 05:39:42 functional-644034 containerd[760]: time="2025-12-10T05:39:42.133554926Z" level=info msg="ImageCreate event name:\"sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:42 functional-644034 containerd[760]: time="2025-12-10T05:39:42.134436524Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.009922994Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.012609676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.022300719Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.023229440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.390543369Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.392682150Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.400567551Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:39:43 functional-644034 containerd[760]: time="2025-12-10T05:39:43.401052420Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:47:54.004531    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:54.005168    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:54.006965    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:54.007468    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:47:54.009066    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 05:47:54 up 30 min,  0 user,  load average: 0.07, 0.48, 0.72
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 05:47:50 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:47:51 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 10 05:47:51 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:51 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:51 functional-644034 kubelet[5322]: E1210 05:47:51.701457    5322 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:47:51 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:47:51 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:47:52 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 05:47:52 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:52 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:52 functional-644034 kubelet[5327]: E1210 05:47:52.458595    5327 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:47:52 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:47:52 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:47:53 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 05:47:53 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:53 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:53 functional-644034 kubelet[5414]: E1210 05:47:53.216255    5414 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:47:53 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:47:53 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:47:53 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 05:47:53 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:53 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:47:53 functional-644034 kubelet[5511]: E1210 05:47:53.967569    5511 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:47:53 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:47:53 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 6 (352.575522ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 05:47:54.495954   51881 status.go:458] kubeconfig endpoint: get endpoint: "functional-644034" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (508.46s)
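A single root cause runs through all three copies of the kubeadm output above and through the kubelet journal: kubelet v1.35.0-rc.1 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1", systemd restart counter 318-321), so the http://127.0.0.1:10248/healthz probe never answers and kubeadm gives up after 4m0s. The following is a minimal triage sketch, not part of the report: it assumes shell access to the functional-644034 node; the stat check is the usual way to tell cgroup v1 from v2; the middle two commands are the ones the kubeadm output itself suggests; and the output file name and the serialized field name failCgroupV1 (inferred from the warning's 'FailCgroupV1') are assumptions the report does not confirm.

	# Confirm which cgroup hierarchy the node runs: "cgroup2fs" means v2, "tmpfs" means v1.
	stat -fc %T /sys/fs/cgroup/
	# Diagnostics recommended by the kubeadm output above:
	systemctl status kubelet
	journalctl -xeu kubelet
	# Hypothetical KubeletConfiguration fragment implied by the preflight warning
	# (field name assumed from 'FailCgroupV1'); how to feed it to this minikube
	# profile is not shown in the report.
	cat <<'EOF' > kubelet-allow-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Note that the suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd, issue #4172) targets a cgroup-driver mismatch and likely does not address this v1.35 cgroup v1 validation failure.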

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1210 05:47:54.510358    4116 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644034 --alsologtostderr -v=8
E1210 05:48:37.012099    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:49:04.721891    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:44.571170    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:07.639164    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:37.012691    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-644034 --alsologtostderr -v=8: exit status 80 (6m5.740796157s)

-- stdout --
	* [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 05:47:54.556574   51953 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:47:54.556774   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.556804   51953 out.go:374] Setting ErrFile to fd 2...
	I1210 05:47:54.556824   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.557680   51953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:47:54.558123   51953 out.go:368] Setting JSON to false
	I1210 05:47:54.558985   51953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1825,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:47:54.559094   51953 start.go:143] virtualization:  
	I1210 05:47:54.562634   51953 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:47:54.566518   51953 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:47:54.566592   51953 notify.go:221] Checking for updates...
	I1210 05:47:54.572379   51953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:47:54.575335   51953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:54.578363   51953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:47:54.581210   51953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:47:54.584186   51953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:47:54.587618   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:54.587759   51953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:47:54.618368   51953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:47:54.618493   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.683662   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.67215006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.683767   51953 docker.go:319] overlay module found
	I1210 05:47:54.686996   51953 out.go:179] * Using the docker driver based on existing profile
	I1210 05:47:54.689865   51953 start.go:309] selected driver: docker
	I1210 05:47:54.689883   51953 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.689998   51953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:47:54.690096   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.769093   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.760185758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.769542   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:54.769597   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:54.769652   51953 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.772754   51953 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:47:54.775504   51953 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:47:54.778330   51953 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:47:54.781109   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:54.781186   51953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:47:54.800171   51953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:47:54.800192   51953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:47:54.839003   51953 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:47:55.003206   51953 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
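
Both 404s above are expected for a release candidate: no v1.35.0-rc.1 preload tarball has been published at either mirror, so minikube falls back to caching the component images one by one (the cache.go lines further down). A minimal sketch of that probe-and-fall-back step, assuming only the standard library and the two URLs copied from the warnings, not minikube's actual preload.go:

    // Sketch only: HEAD each known preload location and fall back when
    // none of them returns 200.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        mirrors := []string{
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
            "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
        }
        for _, url := range mirrors {
            resp, err := http.Head(url)
            if err != nil {
                fmt.Printf("probe error: %v\n", err)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("preload exists:", url)
                return
            }
            fmt.Printf("%s status code: %d\n", url, resp.StatusCode)
        }
        fmt.Println("no preload tarball; caching images one by one instead")
    }
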
	I1210 05:47:55.003455   51953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:47:55.003769   51953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:47:55.003826   51953 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.003903   51953 start.go:364] duration metric: took 49.001µs to acquireMachinesLock for "functional-644034"
	I1210 05:47:55.003933   51953 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:47:55.003940   51953 fix.go:54] fixHost starting: 
	I1210 05:47:55.004094   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.004258   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:55.028659   51953 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:47:55.028694   51953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:47:55.031932   51953 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:47:55.031977   51953 machine.go:94] provisionDockerMachine start ...
	I1210 05:47:55.032062   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.055133   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.055465   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.055479   51953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:47:55.170848   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.207999   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.208023   51953 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:47:55.208102   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.228767   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.229073   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.229085   51953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:47:55.357858   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.390746   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.390831   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.434495   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.434811   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.434828   51953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
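
The shell above is idempotent: it exits early when /etc/hosts already maps the hostname, rewrites an existing 127.0.1.1 line when one is present, and only appends as a last resort. The same logic as a Go sketch, with a hypothetical ensureHostname helper and a scratch file path so it can be tried without touching the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the shell above: no-op if the name is already
    // mapped, rewrite an existing 127.0.1.1 line, else append one.
    // Hypothetical helper, not a minikube API.
    func ensureHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        present := regexp.MustCompile(`\s` + regexp.QuoteMeta(name) + `$`)
        for _, l := range lines {
            if present.MatchString(l) {
                return nil // already mapped; nothing to do
            }
        }
        loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
        for i, l := range lines {
            if loopback.MatchString(l) {
                lines[i] = "127.0.1.1 " + name
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
            }
        }
        lines = append(lines, "127.0.1.1 "+name)
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        // Scratch copy, not the real /etc/hosts.
        if err := ensureHostname("hosts.scratch", "functional-644034"); err != nil {
            fmt.Println("update failed:", err)
        }
    }
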
	I1210 05:47:55.523319   51953 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523359   51953 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523419   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:47:55.523430   51953 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.759µs
	I1210 05:47:55.523435   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:47:55.523445   51953 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 87.246µs
	I1210 05:47:55.523453   51953 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523438   51953 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:47:55.523449   51953 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523467   51953 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523481   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:47:55.523488   51953 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.262µs
	I1210 05:47:55.523494   51953 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:47:55.523503   51953 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523523   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:47:55.523531   51953 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 65.428µs
	I1210 05:47:55.523538   51953 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523542   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:47:55.523548   51953 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.473µs
	I1210 05:47:55.523554   51953 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:47:55.523548   51953 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523565   51953 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523317   51953 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523599   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:47:55.523607   51953 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.7µs
	I1210 05:47:55.523610   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:47:55.523613   51953 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:47:55.523600   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:47:55.523617   51953 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 70.203µs
	I1210 05:47:55.523622   51953 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 325.49µs
	I1210 05:47:55.523626   51953 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523628   51953 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523644   51953 cache.go:87] Successfully saved all images to host disk.
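
Every `cache image ... took <N>µs` entry above finishes in microseconds because the tarballs survive from the earlier run: each goroutine takes its per-image lock, stats the target file, and skips the save. A condensed sketch of that concurrent check-then-skip pattern (standard library only; the actual image export is elided):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "sync"
    )

    func main() {
        cacheDir := "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64"
        images := map[string]string{
            "registry.k8s.io/kube-apiserver:v1.35.0-rc.1": "registry.k8s.io/kube-apiserver_v1.35.0-rc.1",
            "registry.k8s.io/pause:3.10.1":                "registry.k8s.io/pause_3.10.1",
        }
        var wg sync.WaitGroup
        for img, rel := range images {
            wg.Add(1)
            go func(img, rel string) {
                defer wg.Done()
                dst := filepath.Join(cacheDir, rel)
                if _, err := os.Stat(dst); err == nil {
                    fmt.Printf("%s exists, skipping save\n", dst)
                    return
                }
                // The real code would export the image to a tar file here;
                // omitted in this sketch.
                fmt.Printf("would save %s -> %s\n", img, dst)
            }(img, rel)
        }
        wg.Wait()
    }
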
	I1210 05:47:55.587205   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:47:55.587232   51953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:47:55.587288   51953 ubuntu.go:190] setting up certificates
	I1210 05:47:55.587298   51953 provision.go:84] configureAuth start
	I1210 05:47:55.587369   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:55.604738   51953 provision.go:143] copyHostCerts
	I1210 05:47:55.604778   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604816   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:47:55.604828   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604905   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:47:55.605000   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605022   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:47:55.605029   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605061   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:47:55.605114   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605134   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:47:55.605139   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605169   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:47:55.605233   51953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:47:55.781276   51953 provision.go:177] copyRemoteCerts
	I1210 05:47:55.781365   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:47:55.781432   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.797956   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:55.902711   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 05:47:55.902771   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:47:55.919779   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 05:47:55.919840   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:47:55.936935   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 05:47:55.936994   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:47:55.953689   51953 provision.go:87] duration metric: took 366.363656ms to configureAuth
	I1210 05:47:55.953721   51953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:47:55.953915   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:55.953927   51953 machine.go:97] duration metric: took 921.944178ms to provisionDockerMachine
	I1210 05:47:55.953936   51953 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:47:55.953952   51953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:47:55.954004   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:47:55.954054   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.971130   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.075277   51953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:47:56.078673   51953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:47:56.078694   51953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:47:56.078699   51953 command_runner.go:130] > VERSION_ID="12"
	I1210 05:47:56.078704   51953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:47:56.078708   51953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:47:56.078712   51953 command_runner.go:130] > ID=debian
	I1210 05:47:56.078717   51953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:47:56.078725   51953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:47:56.078732   51953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:47:56.078800   51953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:47:56.078828   51953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:47:56.078840   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:47:56.078899   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:47:56.078986   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:47:56.078998   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1210 05:47:56.079103   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:47:56.079112   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> /etc/test/nested/copy/4116/hosts
	I1210 05:47:56.079156   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:47:56.086554   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:56.104005   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:47:56.121596   51953 start.go:296] duration metric: took 167.644644ms for postStartSetup
	I1210 05:47:56.121686   51953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:47:56.121728   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.138924   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.243468   51953 command_runner.go:130] > 14%
	I1210 05:47:56.243960   51953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:47:56.248281   51953 command_runner.go:130] > 169G
	I1210 05:47:56.248748   51953 fix.go:56] duration metric: took 1.244804723s for fixHost
	I1210 05:47:56.248771   51953 start.go:83] releasing machines lock for "functional-644034", held for 1.24485909s
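
fixHost wraps up by sampling disk pressure on /var with two df|awk one-liners just above (14% used, 169G free). Run locally instead of over SSH, the same checks look like this sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, cmd := range []string{
            `df -h /var | awk 'NR==2{print $5}'`,  // percent used
            `df -BG /var | awk 'NR==2{print $4}'`, // gigabytes free
        } {
            out, err := exec.Command("sh", "-c", cmd).Output()
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%s => %s", cmd, out)
        }
    }
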
	I1210 05:47:56.248837   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:56.266070   51953 ssh_runner.go:195] Run: cat /version.json
	I1210 05:47:56.266123   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.266146   51953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:47:56.266199   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.283872   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.284272   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.472387   51953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 05:47:56.475023   51953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:47:56.475222   51953 ssh_runner.go:195] Run: systemctl --version
	I1210 05:47:56.481051   51953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:47:56.481144   51953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:47:56.481557   51953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:47:56.485740   51953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:47:56.485802   51953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:47:56.485889   51953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:47:56.493391   51953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:47:56.493413   51953 start.go:496] detecting cgroup driver to use...
	I1210 05:47:56.493443   51953 detect.go:187] detected "cgroupfs" cgroup driver on host os
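
The docker info dump at the top of this log already reported CgroupDriver:cgroupfs, and the containerd deprecation warning further down confirms the host is still on cgroup v1. One common way to tell the hierarchies apart, shown as a sketch rather than detect.go's exact logic, is to look for the cgroup v2 marker file:

    // Sketch: a host exposing /sys/fs/cgroup/cgroup.controllers is on the
    // unified cgroup v2 hierarchy; otherwise assume cgroup v1 (cgroupfs).
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (the systemd driver is the usual choice)")
        } else {
            fmt.Println("cgroup v1 (cgroupfs driver)")
        }
    }
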
	I1210 05:47:56.493499   51953 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:47:56.508720   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:47:56.521711   51953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:47:56.521777   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:47:56.537527   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:47:56.551315   51953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:47:56.656595   51953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:47:56.765354   51953 docker.go:234] disabling docker service ...
	I1210 05:47:56.765422   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:47:56.780352   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:47:56.793570   51953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:47:56.900961   51953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:47:57.025824   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:47:57.039104   51953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:47:57.052658   51953 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:47:57.053978   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.213891   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:47:57.223164   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:47:57.232001   51953 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:47:57.232070   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:47:57.240776   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.249302   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:47:57.258094   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.266381   51953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:47:57.274230   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:47:57.282766   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:47:57.291675   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:47:57.300542   51953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:47:57.307150   51953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:47:57.308059   51953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:47:57.315237   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:57.433904   51953 ssh_runner.go:195] Run: sudo systemctl restart containerd
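
The run of sed commands above patches /etc/containerd/config.toml in place before this restart: pin the sandbox image to pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver, normalize runtime types to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. The same kind of edit expressed with Go's regexp package, as a sketch over an in-memory sample rather than the real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        config := `
      sandbox_image = "registry.k8s.io/pause:3.9"
      SystemdCgroup = true
      conf_dir = "/etc/cni/net.mk"
    `
        rules := []struct{ pattern, replace string }{
            {`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
            {`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
            {`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
        }
        for _, r := range rules {
            config = regexp.MustCompile(r.pattern).ReplaceAllString(config, r.replace)
        }
        fmt.Print(config)
    }
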
	I1210 05:47:57.552794   51953 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:47:57.552901   51953 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:47:57.556769   51953 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 05:47:57.556839   51953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:47:57.556861   51953 command_runner.go:130] > Device: 0,73	Inode: 1614        Links: 1
	I1210 05:47:57.556893   51953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:57.556921   51953 command_runner.go:130] > Access: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556947   51953 command_runner.go:130] > Modify: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556968   51953 command_runner.go:130] > Change: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.557011   51953 command_runner.go:130] >  Birth: -
	I1210 05:47:57.557078   51953 start.go:564] Will wait 60s for crictl version
	I1210 05:47:57.557155   51953 ssh_runner.go:195] Run: which crictl
	I1210 05:47:57.560538   51953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:47:57.560706   51953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:47:57.582482   51953 command_runner.go:130] > Version:  0.1.0
	I1210 05:47:57.582585   51953 command_runner.go:130] > RuntimeName:  containerd
	I1210 05:47:57.582609   51953 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 05:47:57.582715   51953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:47:57.584523   51953 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:47:57.584650   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.601892   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.603507   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.622429   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.630007   51953 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:47:57.632949   51953 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:47:57.648626   51953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:47:57.652604   51953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 05:47:57.652711   51953 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:47:57.652889   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.820648   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.971830   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:58.124406   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:58.124495   51953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:47:58.146688   51953 command_runner.go:130] > {
	I1210 05:47:58.146710   51953 command_runner.go:130] >   "images":  [
	I1210 05:47:58.146724   51953 command_runner.go:130] >     {
	I1210 05:47:58.146735   51953 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 05:47:58.146741   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146747   51953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 05:47:58.146750   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146755   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146765   51953 command_runner.go:130] >       "size":  "8032639",
	I1210 05:47:58.146779   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146784   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146790   51953 command_runner.go:130] >     },
	I1210 05:47:58.146794   51953 command_runner.go:130] >     {
	I1210 05:47:58.146801   51953 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 05:47:58.146808   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146813   51953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 05:47:58.146817   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146821   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146830   51953 command_runner.go:130] >       "size":  "21166088",
	I1210 05:47:58.146837   51953 command_runner.go:130] >       "username":  "nonroot",
	I1210 05:47:58.146841   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146844   51953 command_runner.go:130] >     },
	I1210 05:47:58.146847   51953 command_runner.go:130] >     {
	I1210 05:47:58.146855   51953 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 05:47:58.146861   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146867   51953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 05:47:58.146873   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146878   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146885   51953 command_runner.go:130] >       "size":  "21748497",
	I1210 05:47:58.146888   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146897   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146904   51953 command_runner.go:130] >       },
	I1210 05:47:58.146908   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146912   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146917   51953 command_runner.go:130] >     },
	I1210 05:47:58.146925   51953 command_runner.go:130] >     {
	I1210 05:47:58.146933   51953 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 05:47:58.146939   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146948   51953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 05:47:58.146955   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146959   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146964   51953 command_runner.go:130] >       "size":  "24690149",
	I1210 05:47:58.146967   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146972   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146975   51953 command_runner.go:130] >       },
	I1210 05:47:58.146979   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146985   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146990   51953 command_runner.go:130] >     },
	I1210 05:47:58.146996   51953 command_runner.go:130] >     {
	I1210 05:47:58.147003   51953 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 05:47:58.147007   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147030   51953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 05:47:58.147034   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147038   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147042   51953 command_runner.go:130] >       "size":  "20670083",
	I1210 05:47:58.147046   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147050   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147056   51953 command_runner.go:130] >       },
	I1210 05:47:58.147060   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147067   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147070   51953 command_runner.go:130] >     },
	I1210 05:47:58.147081   51953 command_runner.go:130] >     {
	I1210 05:47:58.147088   51953 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 05:47:58.147092   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147099   51953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 05:47:58.147103   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147107   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147111   51953 command_runner.go:130] >       "size":  "22430795",
	I1210 05:47:58.147122   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147127   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147132   51953 command_runner.go:130] >     },
	I1210 05:47:58.147135   51953 command_runner.go:130] >     {
	I1210 05:47:58.147144   51953 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 05:47:58.147150   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147155   51953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 05:47:58.147161   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147173   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147180   51953 command_runner.go:130] >       "size":  "15403461",
	I1210 05:47:58.147183   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147187   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147190   51953 command_runner.go:130] >       },
	I1210 05:47:58.147194   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147198   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147205   51953 command_runner.go:130] >     },
	I1210 05:47:58.147208   51953 command_runner.go:130] >     {
	I1210 05:47:58.147215   51953 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 05:47:58.147221   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147226   51953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 05:47:58.147232   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147236   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147248   51953 command_runner.go:130] >       "size":  "265458",
	I1210 05:47:58.147252   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147256   51953 command_runner.go:130] >         "value":  "65535"
	I1210 05:47:58.147259   51953 command_runner.go:130] >       },
	I1210 05:47:58.147270   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147274   51953 command_runner.go:130] >       "pinned":  true
	I1210 05:47:58.147277   51953 command_runner.go:130] >     }
	I1210 05:47:58.147282   51953 command_runner.go:130] >   ]
	I1210 05:47:58.147284   51953 command_runner.go:130] > }
	I1210 05:47:58.149521   51953 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:47:58.149540   51953 cache_images.go:86] Images are preloaded, skipping loading
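
containerd.go:627 can conclude that all images are preloaded because the JSON above contains a repoTag for every component v1.35.0-rc.1 needs. A sketch of that verification, decoding the `sudo crictl images --output json` shape shown above (the struct is inferred from that output; names are illustrative, not minikube's):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, tag := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
            "registry.k8s.io/etcd:3.6.6-0",
            "registry.k8s.io/pause:3.10.1",
        } {
            if !have[tag] {
                fmt.Println("missing:", tag)
                return
            }
        }
        fmt.Println("all images are preloaded for containerd runtime.")
    }
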
	I1210 05:47:58.149552   51953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:47:58.149645   51953 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
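
The empty ExecStart= in the generated unit above is a systemd idiom: assigning an empty value clears the ExecStart inherited from the base kubelet.service, so the override replaces the command line instead of appending a second one. A sketch that emits the same drop-in text (written to a scratch path here; the real target would be a drop-in under a kubelet.service.d directory):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        unit := `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    `
        if err := os.WriteFile("10-kubeadm.conf.sample", []byte(unit), 0644); err != nil {
            fmt.Println(err)
        }
    }
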
	I1210 05:47:58.149706   51953 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:47:58.176587   51953 command_runner.go:130] > {
	I1210 05:47:58.176610   51953 command_runner.go:130] >   "cniconfig": {
	I1210 05:47:58.176616   51953 command_runner.go:130] >     "Networks": [
	I1210 05:47:58.176620   51953 command_runner.go:130] >       {
	I1210 05:47:58.176624   51953 command_runner.go:130] >         "Config": {
	I1210 05:47:58.176629   51953 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 05:47:58.176644   51953 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 05:47:58.176648   51953 command_runner.go:130] >           "Plugins": [
	I1210 05:47:58.176652   51953 command_runner.go:130] >             {
	I1210 05:47:58.176657   51953 command_runner.go:130] >               "Network": {
	I1210 05:47:58.176662   51953 command_runner.go:130] >                 "ipam": {},
	I1210 05:47:58.176673   51953 command_runner.go:130] >                 "type": "loopback"
	I1210 05:47:58.176678   51953 command_runner.go:130] >               },
	I1210 05:47:58.176687   51953 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 05:47:58.176691   51953 command_runner.go:130] >             }
	I1210 05:47:58.176694   51953 command_runner.go:130] >           ],
	I1210 05:47:58.176704   51953 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 05:47:58.176717   51953 command_runner.go:130] >         },
	I1210 05:47:58.176725   51953 command_runner.go:130] >         "IFName": "lo"
	I1210 05:47:58.176728   51953 command_runner.go:130] >       }
	I1210 05:47:58.176732   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176736   51953 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 05:47:58.176742   51953 command_runner.go:130] >     "PluginDirs": [
	I1210 05:47:58.176746   51953 command_runner.go:130] >       "/opt/cni/bin"
	I1210 05:47:58.176752   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176756   51953 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 05:47:58.176771   51953 command_runner.go:130] >     "Prefix": "eth"
	I1210 05:47:58.176775   51953 command_runner.go:130] >   },
	I1210 05:47:58.176782   51953 command_runner.go:130] >   "config": {
	I1210 05:47:58.176789   51953 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 05:47:58.176793   51953 command_runner.go:130] >       "/etc/cdi",
	I1210 05:47:58.176797   51953 command_runner.go:130] >       "/var/run/cdi"
	I1210 05:47:58.176803   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176807   51953 command_runner.go:130] >     "cni": {
	I1210 05:47:58.176813   51953 command_runner.go:130] >       "binDir": "",
	I1210 05:47:58.176817   51953 command_runner.go:130] >       "binDirs": [
	I1210 05:47:58.176821   51953 command_runner.go:130] >         "/opt/cni/bin"
	I1210 05:47:58.176825   51953 command_runner.go:130] >       ],
	I1210 05:47:58.176836   51953 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 05:47:58.176840   51953 command_runner.go:130] >       "confTemplate": "",
	I1210 05:47:58.176844   51953 command_runner.go:130] >       "ipPref": "",
	I1210 05:47:58.176850   51953 command_runner.go:130] >       "maxConfNum": 1,
	I1210 05:47:58.176854   51953 command_runner.go:130] >       "setupSerially": false,
	I1210 05:47:58.176861   51953 command_runner.go:130] >       "useInternalLoopback": false
	I1210 05:47:58.176864   51953 command_runner.go:130] >     },
	I1210 05:47:58.176874   51953 command_runner.go:130] >     "containerd": {
	I1210 05:47:58.176880   51953 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 05:47:58.176886   51953 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 05:47:58.176892   51953 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 05:47:58.176901   51953 command_runner.go:130] >       "runtimes": {
	I1210 05:47:58.176905   51953 command_runner.go:130] >         "runc": {
	I1210 05:47:58.176909   51953 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 05:47:58.176915   51953 command_runner.go:130] >           "PodAnnotations": null,
	I1210 05:47:58.176920   51953 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 05:47:58.176926   51953 command_runner.go:130] >           "cgroupWritable": false,
	I1210 05:47:58.176930   51953 command_runner.go:130] >           "cniConfDir": "",
	I1210 05:47:58.176934   51953 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 05:47:58.176939   51953 command_runner.go:130] >           "io_type": "",
	I1210 05:47:58.176943   51953 command_runner.go:130] >           "options": {
	I1210 05:47:58.176950   51953 command_runner.go:130] >             "BinaryName": "",
	I1210 05:47:58.176955   51953 command_runner.go:130] >             "CriuImagePath": "",
	I1210 05:47:58.176970   51953 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 05:47:58.176977   51953 command_runner.go:130] >             "IoGid": 0,
	I1210 05:47:58.176981   51953 command_runner.go:130] >             "IoUid": 0,
	I1210 05:47:58.176985   51953 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 05:47:58.176991   51953 command_runner.go:130] >             "Root": "",
	I1210 05:47:58.176995   51953 command_runner.go:130] >             "ShimCgroup": "",
	I1210 05:47:58.177002   51953 command_runner.go:130] >             "SystemdCgroup": false
	I1210 05:47:58.177005   51953 command_runner.go:130] >           },
	I1210 05:47:58.177011   51953 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 05:47:58.177019   51953 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 05:47:58.177023   51953 command_runner.go:130] >           "runtimePath": "",
	I1210 05:47:58.177030   51953 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 05:47:58.177035   51953 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 05:47:58.177041   51953 command_runner.go:130] >           "snapshotter": ""
	I1210 05:47:58.177044   51953 command_runner.go:130] >         }
	I1210 05:47:58.177049   51953 command_runner.go:130] >       }
	I1210 05:47:58.177052   51953 command_runner.go:130] >     },
	I1210 05:47:58.177065   51953 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 05:47:58.177073   51953 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 05:47:58.177078   51953 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 05:47:58.177083   51953 command_runner.go:130] >     "disableApparmor": false,
	I1210 05:47:58.177090   51953 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 05:47:58.177094   51953 command_runner.go:130] >     "disableProcMount": false,
	I1210 05:47:58.177098   51953 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 05:47:58.177102   51953 command_runner.go:130] >     "enableCDI": true,
	I1210 05:47:58.177106   51953 command_runner.go:130] >     "enableSelinux": false,
	I1210 05:47:58.177114   51953 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 05:47:58.177118   51953 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 05:47:58.177125   51953 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 05:47:58.177130   51953 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 05:47:58.177138   51953 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 05:47:58.177142   51953 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 05:47:58.177147   51953 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 05:47:58.177160   51953 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177170   51953 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 05:47:58.177176   51953 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177186   51953 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 05:47:58.177190   51953 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 05:47:58.177193   51953 command_runner.go:130] >   },
	I1210 05:47:58.177197   51953 command_runner.go:130] >   "features": {
	I1210 05:47:58.177201   51953 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 05:47:58.177204   51953 command_runner.go:130] >   },
	I1210 05:47:58.177209   51953 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 05:47:58.177221   51953 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177233   51953 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177237   51953 command_runner.go:130] >   "runtimeHandlers": [
	I1210 05:47:58.177246   51953 command_runner.go:130] >     {
	I1210 05:47:58.177250   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177255   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177259   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177261   51953 command_runner.go:130] >       }
	I1210 05:47:58.177264   51953 command_runner.go:130] >     },
	I1210 05:47:58.177267   51953 command_runner.go:130] >     {
	I1210 05:47:58.177271   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177275   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177279   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177282   51953 command_runner.go:130] >       },
	I1210 05:47:58.177287   51953 command_runner.go:130] >       "name": "runc"
	I1210 05:47:58.177289   51953 command_runner.go:130] >     }
	I1210 05:47:58.177293   51953 command_runner.go:130] >   ],
	I1210 05:47:58.177296   51953 command_runner.go:130] >   "status": {
	I1210 05:47:58.177300   51953 command_runner.go:130] >     "conditions": [
	I1210 05:47:58.177303   51953 command_runner.go:130] >       {
	I1210 05:47:58.177307   51953 command_runner.go:130] >         "message": "",
	I1210 05:47:58.177314   51953 command_runner.go:130] >         "reason": "",
	I1210 05:47:58.177318   51953 command_runner.go:130] >         "status": true,
	I1210 05:47:58.177329   51953 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 05:47:58.177335   51953 command_runner.go:130] >       },
	I1210 05:47:58.177339   51953 command_runner.go:130] >       {
	I1210 05:47:58.177345   51953 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 05:47:58.177356   51953 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 05:47:58.177360   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177365   51953 command_runner.go:130] >         "type": "NetworkReady"
	I1210 05:47:58.177373   51953 command_runner.go:130] >       },
	I1210 05:47:58.177376   51953 command_runner.go:130] >       {
	I1210 05:47:58.177397   51953 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 05:47:58.177406   51953 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 05:47:58.177414   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177420   51953 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 05:47:58.177425   51953 command_runner.go:130] >       }
	I1210 05:47:58.177428   51953 command_runner.go:130] >     ]
	I1210 05:47:58.177431   51953 command_runner.go:130] >   }
	I1210 05:47:58.177434   51953 command_runner.go:130] > }
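The runtime status dump above shows containerd itself is ready but `NetworkReady` is false because no CNI config exists yet in /etc/cni/net.d, which is why minikube goes on to pick kindnet in the next step. A minimal sketch of decoding those conditions — the Go types here are illustrative, not minikube's own; only the JSON keys come from the log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Condition mirrors the entries in "status.conditions" above.
type Condition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type RuntimeStatus struct {
	Status struct {
		Conditions []Condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Trimmed-down copy of the runtime status JSON from the log.
	raw := []byte(`{"status":{"conditions":[
		{"type":"RuntimeReady","status":true,"reason":"","message":""},
		{"type":"NetworkReady","status":false,"reason":"NetworkPluginNotReady",
		 "message":"Network plugin returns error: cni plugin not initialized"}]}}`)

	var rs RuntimeStatus
	if err := json.Unmarshal(raw, &rs); err != nil {
		panic(err)
	}
	// Report every condition that is not met, as a CRI client would.
	for _, c := range rs.Status.Conditions {
		if !c.Status {
			fmt.Printf("runtime condition %s not met: %s (%s)\n", c.Type, c.Reason, c.Message)
		}
	}
}
```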
	I1210 05:47:58.177746   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:58.177762   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:58.177786   51953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:47:58.177809   51953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:47:58.177931   51953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
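	Note that the generated KubeletConfiguration above deliberately disables disk-pressure eviction (imageGCHighThresholdPercent: 100, all evictionHard thresholds at 0%). One way to sanity-check fields like these is to decode the document; the sketch below uses gopkg.in/yaml.v3 with an illustrative struct — only the YAML keys come from the config above:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig is an illustrative subset of KubeletConfiguration;
// field names are ours, the yaml keys are from the generated config.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	// Excerpt of the KubeletConfiguration document generated above.
	doc := []byte(`
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
failSwapOn: false
`)
	var kc kubeletConfig
	if err := yaml.Unmarshal(doc, &kc); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", kc)
}
```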
	
	I1210 05:47:58.178005   51953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:47:58.184894   51953 command_runner.go:130] > kubeadm
	I1210 05:47:58.184912   51953 command_runner.go:130] > kubectl
	I1210 05:47:58.184916   51953 command_runner.go:130] > kubelet
	I1210 05:47:58.185786   51953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:47:58.185866   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:47:58.193140   51953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:47:58.205426   51953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:47:58.217773   51953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
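	The "scp memory --> <path>" entries mean the payload was generated in memory and streamed over the existing SSH connection rather than copied from a local file. A rough sketch of that pattern using golang.org/x/crypto/ssh — this is not minikube's implementation; the port matches the sshutil log further down, while user, auth, and payload are placeholders:

```go
package main

import (
	"bytes"
	"log"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams an in-memory payload to dest on the remote host
// by piping it into a sudo'd tee over a fresh SSH session.
func writeRemote(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dest + " >/dev/null")
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ /* e.g. ssh.PublicKeys(signer) */ },
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := writeRemote(client, []byte("unit file contents"), "/lib/systemd/system/kubelet.service"); err != nil {
		log.Fatal(err)
	}
}
```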
	I1210 05:47:58.230424   51953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:47:58.234124   51953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:47:58.234224   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:58.348721   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:58.367663   51953 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:47:58.367683   51953 certs.go:195] generating shared ca certs ...
	I1210 05:47:58.367699   51953 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:58.367828   51953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:47:58.367870   51953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:47:58.367878   51953 certs.go:257] generating profile certs ...
	I1210 05:47:58.367976   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:47:58.368034   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:47:58.368079   51953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:47:58.368088   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:47:58.368100   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:47:58.368115   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:47:58.368126   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:47:58.368137   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:47:58.368148   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:47:58.368163   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:47:58.368174   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:47:58.368220   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:47:58.368248   51953 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:47:58.368256   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:47:58.368286   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:47:58.368309   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:47:58.368331   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:47:58.368373   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:58.368402   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.368414   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.368427   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.368978   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:47:58.388893   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:47:58.409416   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:47:58.428450   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:47:58.446489   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:47:58.465644   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:47:58.483264   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:47:58.500807   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:47:58.518107   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:47:58.536070   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:47:58.553632   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:47:58.571692   51953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:47:58.584898   51953 ssh_runner.go:195] Run: openssl version
	I1210 05:47:58.590608   51953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:47:58.591139   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.599076   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:47:58.606632   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610200   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610255   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610308   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.650574   51953 command_runner.go:130] > 51391683
	I1210 05:47:58.651004   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:47:58.658249   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.665388   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:47:58.672651   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676295   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676329   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676381   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.716661   51953 command_runner.go:130] > 3ec20f2e
	I1210 05:47:58.717156   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:47:58.724496   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.731755   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:47:58.739224   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742739   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742773   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742827   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.783109   51953 command_runner.go:130] > b5213941
	I1210 05:47:58.783531   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
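	Each "openssl x509 -hash -noout" call above prints the certificate's subject hash (51391683, 3ec20f2e, b5213941), and the matching /etc/ssl/certs/<hash>.0 symlink is what lets OpenSSL-based clients resolve the CA. A sketch of that hash-and-link step, shelling out to openssl just as the logged commands do (paths from the log; run as root in practice):

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash used to name trust-store symlinks.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	// Emulate `ln -fs`: drop any stale link, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}
```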
	I1210 05:47:58.790793   51953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794232   51953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794258   51953 command_runner.go:130] >   Size: 1172      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:47:58.794265   51953 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1210 05:47:58.794272   51953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:58.794286   51953 command_runner.go:130] > Access: 2025-12-10 05:43:51.022657545 +0000
	I1210 05:47:58.794292   51953 command_runner.go:130] > Modify: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794297   51953 command_runner.go:130] > Change: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794305   51953 command_runner.go:130] >  Birth: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794558   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:47:58.837377   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.837465   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:47:58.877636   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.878121   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:47:58.918797   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.919235   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:47:58.959487   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.960010   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:47:59.003251   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:59.003763   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:47:59.044279   51953 command_runner.go:130] > Certificate will not expire
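	The "-checkend 86400" probes above ask whether each certificate expires within the next 24 hours, which is how minikube decides against regenerating them. The same check can be done natively; a minimal sketch using crypto/x509 (cert path taken from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path
// expires within d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```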
	I1210 05:47:59.044747   51953 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:59.044823   51953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:47:59.044887   51953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:47:59.069970   51953 cri.go:89] found id: ""
	I1210 05:47:59.070038   51953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:47:59.076652   51953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:47:59.076673   51953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:47:59.076679   51953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:47:59.077535   51953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:47:59.077555   51953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:47:59.077617   51953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:47:59.084671   51953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:47:59.085448   51953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-644034" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.085850   51953 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "functional-644034" cluster setting kubeconfig missing "functional-644034" context setting]
	I1210 05:47:59.086310   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.087190   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.087371   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.088034   51953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:47:59.088055   51953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:47:59.088068   51953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:47:59.088074   51953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:47:59.088078   51953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:47:59.088429   51953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:47:59.089407   51953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:47:59.096980   51953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 05:47:59.097014   51953 kubeadm.go:602] duration metric: took 19.453757ms to restartPrimaryControlPlane
	I1210 05:47:59.097024   51953 kubeadm.go:403] duration metric: took 52.281886ms to StartCluster
	I1210 05:47:59.097064   51953 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097152   51953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.097734   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097941   51953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 05:47:59.098267   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:59.098318   51953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:47:59.098380   51953 addons.go:70] Setting storage-provisioner=true in profile "functional-644034"
	I1210 05:47:59.098393   51953 addons.go:239] Setting addon storage-provisioner=true in "functional-644034"
	I1210 05:47:59.098419   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.098907   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.101905   51953 out.go:179] * Verifying Kubernetes components...
	I1210 05:47:59.106662   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:59.109785   51953 addons.go:70] Setting default-storageclass=true in profile "functional-644034"
	I1210 05:47:59.109823   51953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-644034"
	I1210 05:47:59.110155   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.137186   51953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:47:59.140065   51953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.140094   51953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:47:59.140172   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.152137   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.152308   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
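	Stripped of the client-go wrapping, the rest.Config dumps above amount to mutual TLS against https://192.168.49.2:8441 using the profile's client certificate. A bare net/http sketch of the node GET that the round_trippers lines below keep retrying — cert paths are taken from the log; this deliberately bypasses client-go:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	profile := "/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034"
	// Client cert/key authenticate us to the apiserver.
	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		panic(err)
	}
	// The cluster CA verifies the apiserver's serving cert.
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-644034")
	if err != nil {
		panic(err) // e.g. "connect: connection refused" while the apiserver is down
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
```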
	I1210 05:47:59.152605   51953 addons.go:239] Setting addon default-storageclass=true in "functional-644034"
	I1210 05:47:59.152636   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.153047   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.173160   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.202277   51953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:47:59.202307   51953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:47:59.202368   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.232670   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.321380   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:59.337472   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.374986   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.169551   51953 node_ready.go:35] waiting up to 6m0s for node "functional-644034" to be "Ready" ...
	I1210 05:48:00.169689   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.169752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.170008   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170051   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170077   51953 retry.go:31] will retry after 139.03743ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170121   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170135   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170145   51953 retry.go:31] will retry after 348.331986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
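	The per-resource retry delays in these lines (139ms, then 233ms, 781ms, … for the provisioner; 348ms, 499ms, … for the storage class) grow with jitter while the apiserver stays unreachable. A minimal sketch of capped, jittered exponential backoff in that spirit — the function names are illustrative, not minikube's retry package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// withBackoff retries f up to attempts times, roughly doubling a
// jittered delay between failures, and returns the last error.
func withBackoff(attempts int, base time.Duration, f func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Jitter in [delay/2, 3*delay/2) to avoid synchronized retries.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = withBackoff(5, 200*time.Millisecond, func() error {
		return errors.New("dial tcp [::1]:8441: connect: connection refused")
	})
}
```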
	I1210 05:48:00.170219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.310507   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.415931   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.416069   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.416135   51953 retry.go:31] will retry after 233.204425ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.519312   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.585157   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.585240   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.585274   51953 retry.go:31] will retry after 499.606359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.650447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.669993   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.712181   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.715417   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.715449   51953 retry.go:31] will retry after 781.025556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.086035   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.148055   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.148095   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.148115   51953 retry.go:31] will retry after 644.355236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.170281   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.170372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.170734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.497246   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:01.552133   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.555247   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.555278   51953 retry.go:31] will retry after 1.200680207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.670555   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.670646   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.670959   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.793341   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.851452   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.854727   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.854768   51953 retry.go:31] will retry after 727.381606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.170188   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.170290   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.170618   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:02.170696   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:02.583237   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:02.649935   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.649981   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.650022   51953 retry.go:31] will retry after 1.310515996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.670155   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.670292   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.670651   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:02.757075   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:02.818837   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.821796   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.821831   51953 retry.go:31] will retry after 1.687874073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:03.170317   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.170406   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.170707   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.670505   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.670583   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.670925   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.961404   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:04.024244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.024282   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.024323   51953 retry.go:31] will retry after 1.628415395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.170524   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.170651   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:04.171129   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:04.510724   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:04.566617   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.570030   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.570064   51953 retry.go:31] will retry after 2.695563296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.670310   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.670389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.670711   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.170563   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.170635   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.170967   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.653658   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:05.670351   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.670461   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.670799   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.744168   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:05.744207   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:05.744248   51953 retry.go:31] will retry after 1.470532715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:06.169848   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.169975   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.170317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:06.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.670264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:06.670329   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
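The tight GET loop against /api/v1/nodes/functional-644034 is minikube waiting for the node's Ready condition: fetch the Node roughly every 500ms, check its conditions, and log the node_ready.go warning while the dial is refused. A hedged client-go sketch of that wait, assuming the kubeconfig path from the log; waitNodeReady is a hypothetical name, not minikube's node_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node object until its Ready condition is True,
    // tolerating transient errors such as "connection refused" while the
    // apiserver restarts. Illustrative only.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		} else {
    			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(config)
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	fmt.Println(waitNodeReady(ctx, cs, "functional-644034"))
    }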
	I1210 05:48:07.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.170058   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:07.215626   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:07.266052   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:07.280336   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.280370   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.280387   51953 retry.go:31] will retry after 5.58106306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333195   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.333236   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333256   51953 retry.go:31] will retry after 2.610344026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.670753   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.670832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.671195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.169912   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.170281   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.669773   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.170205   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.170536   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:09.170594   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:09.670237   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.670311   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.670667   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.944159   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:10.010561   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:10.010619   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.010642   51953 retry.go:31] will retry after 2.5620788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.169787   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.169854   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.170167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:10.669895   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.669974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.169913   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.170234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.669833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.670159   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:11.670233   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:12.169956   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.170030   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.170375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.572886   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:12.631295   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.634400   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.634432   51953 retry.go:31] will retry after 5.90622422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.670736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.670808   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.671172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.862533   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:12.918893   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.918929   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.918949   51953 retry.go:31] will retry after 8.272023324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
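The failure mode is identical on every attempt: kubectl apply runs client-side validation, validation needs the cluster's /openapi/v2 schema, and while nothing is listening on 8441 that download is refused. The error text itself suggests --validate=false as an escape hatch; minikube keeps retrying instead, since a cluster that cannot serve OpenAPI cannot accept the apply either. A sketch of the same invocation with validation disabled, mirroring the command and paths from this run (whether to skip validation is a policy choice, not what minikube does here):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon runs the kubectl invocation shown in the log, but with
    // client-side validation disabled so no /openapi/v2 download is needed.
    // Paths match this run; the helper name is made up for illustration.
    func applyAddon(manifest string) error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"apply", "--force", "--validate=false", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	return err
    }

    func main() {
    	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }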
	I1210 05:48:13.170464   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.170532   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.170809   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:13.670589   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.670665   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.670979   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:13.671051   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:14.170623   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.170704   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.171052   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:14.669975   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.670351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.170046   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.170119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.170417   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.670099   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.670181   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:16.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:16.170210   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:16.669859   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.669945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.169889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.669877   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.669969   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.670225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:18.169971   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.170045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.170383   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:18.170445   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:18.540818   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:18.598871   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:18.601811   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.601841   51953 retry.go:31] will retry after 12.747843498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.670272   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.670582   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.170370   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.170452   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.170779   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.670779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.671114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.169841   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.169920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.170286   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.669841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.670151   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:20.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:21.169914   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.169987   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.170308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:21.191680   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:21.254244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:21.254291   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.254309   51953 retry.go:31] will retry after 13.504528238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.669784   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.669884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.670234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.169979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.170274   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.670052   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.670132   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.670457   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:22.670511   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:23.170156   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.170275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.170563   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:23.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.169911   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.170218   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.670237   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.670543   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:24.670597   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:25.170342   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.170412   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.170680   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:25.670543   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.670623   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.670919   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.170671   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.170749   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.669682   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.669752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.670007   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:27.170402   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.170479   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.170798   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:27.170859   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:27.670357   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.670437   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.670773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.170551   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.170643   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.170896   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.670265   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.670338   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.670758   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:29.170472   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.170542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.170877   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:29.170933   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:29.669736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.669810   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.670135   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.169940   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.170305   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.669879   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.669957   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.670279   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.169834   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.350447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:31.407735   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:31.410898   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.410931   51953 retry.go:31] will retry after 18.518112559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.670455   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.670542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.670952   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:31.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:32.170764   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.170837   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.171167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:32.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.669900   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.670158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.169936   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.669974   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.670051   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.670366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.170663   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.170730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.171001   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:34.171083   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:34.670065   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.670459   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.759888   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:34.813991   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:34.817148   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:34.817180   51953 retry.go:31] will retry after 7.858877757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:35.170714   51953 type.go:168] "Request Body" body=""
	I1210 05:48:35.170783   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:35.171144   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:35.669794   51953 type.go:168] "Request Body" body=""
	I1210 05:48:35.669884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:35.670145   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:36.169862   51953 type.go:168] "Request Body" body=""
	I1210 05:48:36.169932   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:36.170264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:36.669949   51953 type.go:168] "Request Body" body=""
	I1210 05:48:36.670019   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:36.670336   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:36.670392   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:37.170023   51953 type.go:168] "Request Body" body=""
	I1210 05:48:37.170089   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:37.170351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:37.670112   51953 type.go:168] "Request Body" body=""
	I1210 05:48:37.670187   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:37.670504   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:38.170212   51953 type.go:168] "Request Body" body=""
	I1210 05:48:38.170304   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:38.170601   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:38.670326   51953 type.go:168] "Request Body" body=""
	I1210 05:48:38.670390   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:38.670677   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:38.670718   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:39.170413   51953 type.go:168] "Request Body" body=""
	I1210 05:48:39.170487   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:39.170808   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:39.669722   51953 type.go:168] "Request Body" body=""
	I1210 05:48:39.669794   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:39.670121   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:40.169742   51953 type.go:168] "Request Body" body=""
	I1210 05:48:40.169816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:40.170090   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:40.669786   51953 type.go:168] "Request Body" body=""
	I1210 05:48:40.669863   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:40.670230   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:41.169931   51953 type.go:168] "Request Body" body=""
	I1210 05:48:41.170003   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:41.170334   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:41.170388   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:41.670036   51953 type.go:168] "Request Body" body=""
	I1210 05:48:41.670109   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:41.670415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:42.170213   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:42.170632   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.670451   51953 type.go:168] "Request Body" body=""
	I1210 05:48:42.670533   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:42.670872   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.677131   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:42.736218   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:42.736261   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:42.736279   51953 retry.go:31] will retry after 23.425189001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
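retry.go schedules each new attempt after a randomized delay, which is why the intervals logged in this run (23.4s here, 38.8s later) are not round numbers. A self-contained sketch of that retry-with-jittered-backoff pattern follows; the durations and attempt cap are illustrative, not minikube's actual tuning.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	backoff := 10 * time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		// Mirrors the kubectl apply invocation from this log.
		out, err := exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
		if err == nil {
			fmt.Println("applied successfully")
			return
		}
		// Add jitter so concurrent retriers don't wake up in lockstep.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d failed (%v): %s\nwill retry after %v\n", attempt, err, out, delay)
		time.Sleep(delay)
		backoff *= 2 // exponential growth between attempts
	}
	fmt.Println("giving up after repeated failures")
}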
	I1210 05:48:43.170386   51953 type.go:168] "Request Body" body=""
	I1210 05:48:43.170464   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:43.170737   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:43.170779   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:43.670538   51953 type.go:168] "Request Body" body=""
	I1210 05:48:43.670609   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:43.670906   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:44.170640   51953 type.go:168] "Request Body" body=""
	I1210 05:48:44.170719   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:44.171057   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:44.669915   51953 type.go:168] "Request Body" body=""
	I1210 05:48:44.669983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:44.670265   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:45.170036   51953 type.go:168] "Request Body" body=""
	I1210 05:48:45.170123   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:45.175201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1210 05:48:45.175287   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:45.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:48:45.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:45.670195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:46.170498   51953 type.go:168] "Request Body" body=""
	I1210 05:48:46.170576   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:46.170876   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:46.670607   51953 type.go:168] "Request Body" body=""
	I1210 05:48:46.670701   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:46.671031   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:47.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:48:47.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:47.170154   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:47.669733   51953 type.go:168] "Request Body" body=""
	I1210 05:48:47.669806   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:47.670071   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:47.670117   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:48.169793   51953 type.go:168] "Request Body" body=""
	I1210 05:48:48.169879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:48.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:48.669835   51953 type.go:168] "Request Body" body=""
	I1210 05:48:48.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:48.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:49.170055   51953 type.go:168] "Request Body" body=""
	I1210 05:48:49.170124   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:49.170378   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:49.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:49.670235   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:49.670525   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:49.670571   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:49.930022   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:49.989791   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:49.993079   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:49.993114   51953 retry.go:31] will retry after 23.38662002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
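Note that the apply dies during client-side validation: kubectl first downloads the OpenAPI schema from the apiserver, so while port 8441 refuses connections every attempt fails with this same error (the message itself offers --validate=false as an escape hatch). The tiny probe below reproduces just the connectivity part of the failure; the address is the one from this log, and the probe is a sketch rather than anything minikube runs.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's validation step dials the apiserver; this checks the same
	// TCP endpoint directly.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8441", 2*time.Second)
	if err != nil {
		// While the apiserver is down this prints "connection refused",
		// matching the errors above.
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}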
	I1210 05:48:50.170615   51953 type.go:168] "Request Body" body=""
	I1210 05:48:50.170692   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:50.171002   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:50.669688   51953 type.go:168] "Request Body" body=""
	I1210 05:48:50.669757   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:50.670060   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:51.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:51.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:51.170229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:51.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:48:51.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:51.670261   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:52.169845   51953 type.go:168] "Request Body" body=""
	I1210 05:48:52.169924   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:52.170187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:52.170237   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:52.669804   51953 type.go:168] "Request Body" body=""
	I1210 05:48:52.669873   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:52.670187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:53.169870   51953 type.go:168] "Request Body" body=""
	I1210 05:48:53.169941   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:53.170273   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:53.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:53.669863   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:53.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:54.169803   51953 type.go:168] "Request Body" body=""
	I1210 05:48:54.169877   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:54.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:54.170270   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:54.670065   51953 type.go:168] "Request Body" body=""
	I1210 05:48:54.670136   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:54.670470   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:55.169793   51953 type.go:168] "Request Body" body=""
	I1210 05:48:55.169876   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:55.170142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:55.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:55.669919   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:55.670247   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:56.169832   51953 type.go:168] "Request Body" body=""
	I1210 05:48:56.169907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:56.170229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:56.669896   51953 type.go:168] "Request Body" body=""
	I1210 05:48:56.669967   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:56.670287   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:56.670338   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:57.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:57.169898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:57.170223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:57.669803   51953 type.go:168] "Request Body" body=""
	I1210 05:48:57.669879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:57.670238   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:58.169908   51953 type.go:168] "Request Body" body=""
	I1210 05:48:58.169985   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:58.170322   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:58.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:48:58.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:58.670445   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:58.670497   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:59.170301   51953 type.go:168] "Request Body" body=""
	I1210 05:48:59.170378   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:59.170749   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:59.670557   51953 type.go:168] "Request Body" body=""
	I1210 05:48:59.670633   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:59.670949   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:00.169725   51953 type.go:168] "Request Body" body=""
	I1210 05:49:00.169813   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:00.170141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:00.670083   51953 type.go:168] "Request Body" body=""
	I1210 05:49:00.670159   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:00.670486   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:00.670533   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:01.169951   51953 type.go:168] "Request Body" body=""
	I1210 05:49:01.170038   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:01.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:01.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:01.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:01.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:02.169846   51953 type.go:168] "Request Body" body=""
	I1210 05:49:02.169918   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:02.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:02.669747   51953 type.go:168] "Request Body" body=""
	I1210 05:49:02.669829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:02.670143   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:03.169862   51953 type.go:168] "Request Body" body=""
	I1210 05:49:03.169937   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:03.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:03.170307   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:03.669983   51953 type.go:168] "Request Body" body=""
	I1210 05:49:03.670055   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:03.670401   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:04.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:49:04.170070   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:04.170429   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:04.670184   51953 type.go:168] "Request Body" body=""
	I1210 05:49:04.670254   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:04.670541   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:05.169853   51953 type.go:168] "Request Body" body=""
	I1210 05:49:05.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:05.170261   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:05.669805   51953 type.go:168] "Request Body" body=""
	I1210 05:49:05.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:05.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:05.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:06.161707   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:06.170118   51953 type.go:168] "Request Body" body=""
	I1210 05:49:06.170187   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:06.170454   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:06.215983   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:06.219418   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:06.219449   51953 retry.go:31] will retry after 38.750779649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:06.669785   51953 type.go:168] "Request Body" body=""
	I1210 05:49:06.669865   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:06.670186   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:07.169875   51953 type.go:168] "Request Body" body=""
	I1210 05:49:07.169941   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:07.170192   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:07.669935   51953 type.go:168] "Request Body" body=""
	I1210 05:49:07.670005   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:07.670350   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:07.670403   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:08.170063   51953 type.go:168] "Request Body" body=""
	I1210 05:49:08.170142   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:08.170510   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:08.670188   51953 type.go:168] "Request Body" body=""
	I1210 05:49:08.670268   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:08.670583   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:09.170358   51953 type.go:168] "Request Body" body=""
	I1210 05:49:09.170435   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:09.170718   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:09.670114   51953 type.go:168] "Request Body" body=""
	I1210 05:49:09.670186   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:09.670501   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:09.670595   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:10.170242   51953 type.go:168] "Request Body" body=""
	I1210 05:49:10.170308   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:10.170650   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:10.670454   51953 type.go:168] "Request Body" body=""
	I1210 05:49:10.670533   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:10.670873   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:11.170681   51953 type.go:168] "Request Body" body=""
	I1210 05:49:11.170756   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:11.171117   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:11.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:49:11.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:11.670185   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:12.169855   51953 type.go:168] "Request Body" body=""
	I1210 05:49:12.169925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:12.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:12.170304   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:12.669856   51953 type.go:168] "Request Body" body=""
	I1210 05:49:12.669928   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:12.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:13.169860   51953 type.go:168] "Request Body" body=""
	I1210 05:49:13.169943   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:13.170217   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:13.380712   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:13.443508   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:13.443549   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:13.443568   51953 retry.go:31] will retry after 17.108062036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:13.669825   51953 type.go:168] "Request Body" body=""
	I1210 05:49:13.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:13.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:14.169973   51953 type.go:168] "Request Body" body=""
	I1210 05:49:14.170046   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:14.170360   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:14.170413   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:14.670243   51953 type.go:168] "Request Body" body=""
	I1210 05:49:14.670320   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:14.670588   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:15.170418   51953 type.go:168] "Request Body" body=""
	I1210 05:49:15.170487   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:15.170795   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:15.670586   51953 type.go:168] "Request Body" body=""
	I1210 05:49:15.670658   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:15.670975   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:16.170704   51953 type.go:168] "Request Body" body=""
	I1210 05:49:16.170776   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:16.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:16.171120   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:16.669813   51953 type.go:168] "Request Body" body=""
	I1210 05:49:16.669905   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:16.670255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:17.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:49:17.169899   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:17.170218   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:17.669769   51953 type.go:168] "Request Body" body=""
	I1210 05:49:17.669844   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:17.670143   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:18.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:49:18.169934   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:18.170269   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:18.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:49:18.670094   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:18.670415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:18.670472   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:19.170323   51953 type.go:168] "Request Body" body=""
	I1210 05:49:19.170395   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:19.170661   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:19.670601   51953 type.go:168] "Request Body" body=""
	I1210 05:49:19.670672   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:19.670993   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:20.169740   51953 type.go:168] "Request Body" body=""
	I1210 05:49:20.169817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:20.170150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:20.670516   51953 type.go:168] "Request Body" body=""
	I1210 05:49:20.670584   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:20.670897   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:20.670954   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:21.170713   51953 type.go:168] "Request Body" body=""
	I1210 05:49:21.170783   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:21.171082   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:21.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:49:21.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:21.670172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:22.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:49:22.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:22.170106   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:22.669822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:22.669894   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:22.670229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:23.169802   51953 type.go:168] "Request Body" body=""
	I1210 05:49:23.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:23.170200   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:23.170257   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:23.669758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:23.669829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:23.670132   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:24.169822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:24.169902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:24.170262   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:24.670129   51953 type.go:168] "Request Body" body=""
	I1210 05:49:24.670207   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:24.670559   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:25.170449   51953 type.go:168] "Request Body" body=""
	I1210 05:49:25.170521   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:25.170831   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:25.170881   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:25.670585   51953 type.go:168] "Request Body" body=""
	I1210 05:49:25.670666   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:25.671038   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:26.170684   51953 type.go:168] "Request Body" body=""
	I1210 05:49:26.170760   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:26.171104   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:26.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:49:26.669864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:26.670150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.169852   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.169930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.170272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.669984   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.670061   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.670384   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:27.670440   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:28.169751   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.170155   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:28.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.669874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.670210   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.170062   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.170136   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.170491   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.670274   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.670550   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:29.670593   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:30.170374   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.170446   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.170838   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:30.552353   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:30.608474   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608517   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608604   51953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
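	The addon failure above is kubectl refusing to apply the manifest because it cannot download the OpenAPI schema from the still-unreachable apiserver; minikube logs "apply failed, will retry" and re-runs the same command. Note that the --validate=false flag suggested in the error would only skip the schema download; the apply itself still needs a reachable apiserver, so it would not help here. Below is a hedged sketch of that retry-on-failure pattern using os/exec; the manifest path, attempt count, and backoff are assumptions, and minikube's real retry logic lives in addons.go rather than looking like this.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry re-runs `kubectl apply --force -f manifest` until it
	    // succeeds or attempts are exhausted, mirroring the retry loop the log
	    // shows for storage-provisioner.yaml. Counts and backoff are
	    // illustrative assumptions.
	    func applyWithRetry(manifest string, attempts int) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	            if e == nil {
	                return nil
	            }
	            err = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, e, out)
	            fmt.Println(err)
	            time.Sleep(2 * time.Second) // assumed backoff
	        }
	        return err
	    }

	    func main() {
	        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
	            fmt.Println("giving up:", err)
	        }
	    }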
	I1210 05:49:30.670690   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.670767   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.671090   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.169783   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.169860   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.170226   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.669889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.670241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:32.169940   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.170013   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.170338   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:32.170396   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:32.670045   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.670119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.670396   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.169834   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.170309   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.670201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.169903   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.169973   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.170269   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.670193   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.670266   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.670601   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:34.670655   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:35.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.170214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:35.669756   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.169901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.170223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.669946   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.670020   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.670367   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:37.170034   51953 type.go:168] "Request Body" body=""
	I1210 05:49:37.170108   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:37.170407   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:37.170461   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:37.669822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:37.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:37.670249   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:38.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:49:38.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:38.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:38.669935   51953 type.go:168] "Request Body" body=""
	I1210 05:49:38.670003   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:38.670313   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:39.170298   51953 type.go:168] "Request Body" body=""
	I1210 05:49:39.170373   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:39.170717   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:39.170771   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:39.670468   51953 type.go:168] "Request Body" body=""
	I1210 05:49:39.670545   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:39.670883   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:40.170669   51953 type.go:168] "Request Body" body=""
	I1210 05:49:40.170737   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:40.171069   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:40.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:49:40.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:40.670211   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:41.169813   51953 type.go:168] "Request Body" body=""
	I1210 05:49:41.169884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:41.170198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:41.669764   51953 type.go:168] "Request Body" body=""
	I1210 05:49:41.669859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:41.670152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:41.670193   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:42.169845   51953 type.go:168] "Request Body" body=""
	I1210 05:49:42.169948   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:42.170319   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:42.669816   51953 type.go:168] "Request Body" body=""
	I1210 05:49:42.669885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:42.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:43.169758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:43.169831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:43.170096   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:43.669848   51953 type.go:168] "Request Body" body=""
	I1210 05:49:43.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:43.670267   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:43.670317   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:44.169816   51953 type.go:168] "Request Body" body=""
	I1210 05:49:44.169893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:44.170187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:44.670057   51953 type.go:168] "Request Body" body=""
	I1210 05:49:44.670140   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:44.670470   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:44.970959   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:45.060109   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064226   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064337   51953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
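	Both addon manifests fail identically because nothing is listening on port 8441, the same root cause behind the connection-refused warnings in the surrounding wait loop. A quick TCP pre-flight check like the sketch below distinguishes "apiserver down" from a genuine validation problem before attempting an apply; the address and timeout are assumptions for illustration, and this is not something minikube does at this point in the log.

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // The apiserver address from the errors above; illustrative only.
	        const addr = "localhost:8441"

	        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	        if err != nil {
	            // This is the state the log is in: connection refused.
	            fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
	            return
	        }
	        conn.Close()
	        fmt.Printf("apiserver reachable at %s; safe to run kubectl apply\n", addr)
	    }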
	I1210 05:49:45.067552   51953 out.go:179] * Enabled addons: 
	I1210 05:49:45.070225   51953 addons.go:530] duration metric: took 1m45.971891823s for enable addons: enabled=[]
	I1210 05:49:45.169999   51953 type.go:168] "Request Body" body=""
	I1210 05:49:45.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:45.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:45.669844   51953 type.go:168] "Request Body" body=""
	I1210 05:49:45.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:45.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:46.169931   51953 type.go:168] "Request Body" body=""
	I1210 05:49:46.170025   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:46.170316   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:46.170369   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:46.670055   51953 type.go:168] "Request Body" body=""
	I1210 05:49:46.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:46.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:47.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:49:47.169900   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:47.170277   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:47.669758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:47.669844   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:47.670170   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:48.169861   51953 type.go:168] "Request Body" body=""
	I1210 05:49:48.169933   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:48.170293   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:48.669823   51953 type.go:168] "Request Body" body=""
	I1210 05:49:48.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:48.670185   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:48.670239   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:49.170189   51953 type.go:168] "Request Body" body=""
	I1210 05:49:49.170282   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:49.170581   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:49.670519   51953 type.go:168] "Request Body" body=""
	I1210 05:49:49.670591   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:49.670933   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:50.170751   51953 type.go:168] "Request Body" body=""
	I1210 05:49:50.170838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:50.171163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:50.669768   51953 type.go:168] "Request Body" body=""
	I1210 05:49:50.669842   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:50.670163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:51.169874   51953 type.go:168] "Request Body" body=""
	I1210 05:49:51.169945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:51.170294   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:51.170350   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:51.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:49:51.669925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:51.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:52.169785   51953 type.go:168] "Request Body" body=""
	I1210 05:49:52.169868   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:52.170166   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:52.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:49:52.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:52.670278   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:53.170002   51953 type.go:168] "Request Body" body=""
	I1210 05:49:53.170083   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:53.170428   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:53.170482   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:53.670134   51953 type.go:168] "Request Body" body=""
	I1210 05:49:53.670209   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:53.670537   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:54.170330   51953 type.go:168] "Request Body" body=""
	I1210 05:49:54.170403   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:54.170997   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:54.669762   51953 type.go:168] "Request Body" body=""
	I1210 05:49:54.669840   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:54.670157   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:55.170437   51953 type.go:168] "Request Body" body=""
	I1210 05:49:55.170508   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:55.170825   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:55.170879   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:55.670656   51953 type.go:168] "Request Body" body=""
	I1210 05:49:55.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:55.671067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:56.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:49:56.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:56.170163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:56.670631   51953 type.go:168] "Request Body" body=""
	I1210 05:49:56.670708   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:56.671047   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:57.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:49:57.169826   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:57.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:57.669853   51953 type.go:168] "Request Body" body=""
	I1210 05:49:57.669923   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:57.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:57.670309   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:58.169747   51953 type.go:168] "Request Body" body=""
	I1210 05:49:58.169817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:58.170122   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:58.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:49:58.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:58.670275   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:59.170063   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.170156   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.170502   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:59.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.670792   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.671123   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:59.671171   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:00.169945   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.170054   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.170391   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:00.670293   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.670372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.670734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.170379   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.170445   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.170785   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.670657   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.671101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:02.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.169916   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:02.170292   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:02.670641   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.670714   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.671049   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.169821   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.170173   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.670257   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.169808   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.169878   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.170170   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.670153   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.670227   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.670558   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:04.670612   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:05.170389   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.170463   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.170790   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:05.670350   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.670419   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.670674   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.170479   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.170562   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.170930   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.670726   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.670801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.671141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:06.671199   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:07.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.170225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:07.669823   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.669897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.670237   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.169822   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.169902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.669915   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.669997   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.670259   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:09.170295   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.170366   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.170686   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:09.170740   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:09.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.670275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.670611   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.170386   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.170464   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.170732   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.670493   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.670908   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:11.170688   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.170762   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.171109   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:11.171166   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:11.669753   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.670111   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:12.169788   51953 type.go:168] "Request Body" body=""
	I1210 05:50:12.169865   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:12.170193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:12.669828   51953 type.go:168] "Request Body" body=""
	I1210 05:50:12.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:12.670272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:13.169792   51953 type.go:168] "Request Body" body=""
	I1210 05:50:13.169860   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:13.170133   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:13.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:50:13.669879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:13.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:13.670257   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:14.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:50:14.169893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:14.170243   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:14.670026   51953 type.go:168] "Request Body" body=""
	I1210 05:50:14.670100   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:14.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:15.170050   51953 type.go:168] "Request Body" body=""
	I1210 05:50:15.170123   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:15.170471   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:15.670177   51953 type.go:168] "Request Body" body=""
	I1210 05:50:15.670250   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:15.670584   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:15.670636   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:16.170320   51953 type.go:168] "Request Body" body=""
	I1210 05:50:16.170389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:16.170687   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:16.670498   51953 type.go:168] "Request Body" body=""
	I1210 05:50:16.670574   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:16.670936   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:17.170736   51953 type.go:168] "Request Body" body=""
	I1210 05:50:17.170817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:17.171164   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:17.670572   51953 type.go:168] "Request Body" body=""
	I1210 05:50:17.670637   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:17.670949   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:17.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:18.169725   51953 type.go:168] "Request Body" body=""
	I1210 05:50:18.169795   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:18.170134   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:18.669834   51953 type.go:168] "Request Body" body=""
	I1210 05:50:18.669953   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:18.670308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:19.170037   51953 type.go:168] "Request Body" body=""
	I1210 05:50:19.170104   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:19.170365   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:19.670201   51953 type.go:168] "Request Body" body=""
	I1210 05:50:19.670277   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:19.670610   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:20.170409   51953 type.go:168] "Request Body" body=""
	I1210 05:50:20.170484   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:20.170822   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:20.170877   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:20.670595   51953 type.go:168] "Request Body" body=""
	I1210 05:50:20.670666   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:20.670993   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:21.171197   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:22.669830   51953 type.go:168] "Request Body" body=""
	I1210 05:51:22.669902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:22.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:23.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:51:23.169901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:23.170299   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:23.669871   51953 type.go:168] "Request Body" body=""
	I1210 05:51:23.669940   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:23.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:23.670238   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:24.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:51:24.169903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:24.170239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:24.670061   51953 type.go:168] "Request Body" body=""
	I1210 05:51:24.670134   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:24.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:25.169972   51953 type.go:168] "Request Body" body=""
	I1210 05:51:25.170044   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:25.170325   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:25.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:51:25.669907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:25.670245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:25.670298   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:26.170334   51953 type.go:168] "Request Body" body=""
	I1210 05:51:26.170405   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:26.170720   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:26.670508   51953 type.go:168] "Request Body" body=""
	I1210 05:51:26.670574   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:26.670837   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:27.170610   51953 type.go:168] "Request Body" body=""
	I1210 05:51:27.170687   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:27.171034   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:27.670641   51953 type.go:168] "Request Body" body=""
	I1210 05:51:27.670716   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:27.671052   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:27.671107   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:28.170507   51953 type.go:168] "Request Body" body=""
	I1210 05:51:28.170587   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:28.170917   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:28.670721   51953 type.go:168] "Request Body" body=""
	I1210 05:51:28.670801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:28.671160   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:29.170214   51953 type.go:168] "Request Body" body=""
	I1210 05:51:29.170299   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:29.170609   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:29.670390   51953 type.go:168] "Request Body" body=""
	I1210 05:51:29.670459   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:29.670758   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:30.170562   51953 type.go:168] "Request Body" body=""
	I1210 05:51:30.170660   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:30.171075   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:30.171137   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:30.670749   51953 type.go:168] "Request Body" body=""
	I1210 05:51:30.670827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:30.671196   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:31.169761   51953 type.go:168] "Request Body" body=""
	I1210 05:51:31.169835   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:31.170191   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:31.669883   51953 type.go:168] "Request Body" body=""
	I1210 05:51:31.669961   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:31.670300   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:32.169880   51953 type.go:168] "Request Body" body=""
	I1210 05:51:32.169952   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:32.170273   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:32.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:51:32.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:32.670089   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:32.670131   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:33.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:51:33.169851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:33.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:33.669798   51953 type.go:168] "Request Body" body=""
	I1210 05:51:33.669868   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:33.670181   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:34.169753   51953 type.go:168] "Request Body" body=""
	I1210 05:51:34.169825   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:34.170091   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:34.669834   51953 type.go:168] "Request Body" body=""
	I1210 05:51:34.669907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:34.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:34.670277   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:35.169820   51953 type.go:168] "Request Body" body=""
	I1210 05:51:35.169944   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:35.170278   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:35.669951   51953 type.go:168] "Request Body" body=""
	I1210 05:51:35.670023   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:35.670279   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:36.169953   51953 type.go:168] "Request Body" body=""
	I1210 05:51:36.170025   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:36.170358   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:36.670063   51953 type.go:168] "Request Body" body=""
	I1210 05:51:36.670133   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:36.670464   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:36.670516   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:37.170022   51953 type.go:168] "Request Body" body=""
	I1210 05:51:37.170090   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:37.170409   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:37.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:51:37.669901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:37.670272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:38.169963   51953 type.go:168] "Request Body" body=""
	I1210 05:51:38.170040   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:38.170377   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:38.669747   51953 type.go:168] "Request Body" body=""
	I1210 05:51:38.669816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:38.670139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:39.170160   51953 type.go:168] "Request Body" body=""
	I1210 05:51:39.170230   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:39.170516   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:39.170556   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:39.670447   51953 type.go:168] "Request Body" body=""
	I1210 05:51:39.670519   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:39.670875   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:40.170718   51953 type.go:168] "Request Body" body=""
	I1210 05:51:40.170785   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:40.171078   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:40.670706   51953 type.go:168] "Request Body" body=""
	I1210 05:51:40.670781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:40.671102   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:41.169776   51953 type.go:168] "Request Body" body=""
	I1210 05:51:41.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:41.170184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:41.670004   51953 type.go:168] "Request Body" body=""
	I1210 05:51:41.670161   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:41.670773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:41.670875   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:42.170771   51953 type.go:168] "Request Body" body=""
	I1210 05:51:42.170847   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:42.171213   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:42.669910   51953 type.go:168] "Request Body" body=""
	I1210 05:51:42.669982   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:42.670284   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:43.169986   51953 type.go:168] "Request Body" body=""
	I1210 05:51:43.170065   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:43.170327   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:43.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:51:43.669901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:43.670214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:44.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:51:44.169896   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:44.170247   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:44.170307   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:44.670143   51953 type.go:168] "Request Body" body=""
	I1210 05:51:44.670220   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:44.670489   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:45.170378   51953 type.go:168] "Request Body" body=""
	I1210 05:51:45.170522   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:45.171164   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:45.669904   51953 type.go:168] "Request Body" body=""
	I1210 05:51:45.669977   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:45.670266   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:46.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:51:46.169981   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:46.170247   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:46.669982   51953 type.go:168] "Request Body" body=""
	I1210 05:51:46.670065   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:46.670412   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:46.670463   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:47.170121   51953 type.go:168] "Request Body" body=""
	I1210 05:51:47.170196   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:47.170526   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:47.670279   51953 type.go:168] "Request Body" body=""
	I1210 05:51:47.670353   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:47.670622   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:48.170397   51953 type.go:168] "Request Body" body=""
	I1210 05:51:48.170475   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:48.170792   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:48.670571   51953 type.go:168] "Request Body" body=""
	I1210 05:51:48.670649   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:48.670997   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:48.671073   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:49.170679   51953 type.go:168] "Request Body" body=""
	I1210 05:51:49.170753   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:49.171102   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:49.670157   51953 type.go:168] "Request Body" body=""
	I1210 05:51:49.670228   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:49.670552   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:50.170360   51953 type.go:168] "Request Body" body=""
	I1210 05:51:50.170435   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:50.170752   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:50.670554   51953 type.go:168] "Request Body" body=""
	I1210 05:51:50.670636   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:50.670942   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:51.170729   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.171139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:51.171187   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:51.669733   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.669807   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.670146   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.169929   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.170284   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.669819   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.669893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.670207   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.169992   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.669991   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.670070   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.670340   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:53.670380   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:54.170031   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.170110   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.170441   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:54.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.670529   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.169832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.170177   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.669779   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:56.169901   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.169974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:56.170373   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:56.669742   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.669816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.670103   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.169781   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.170181   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.669889   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.669965   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:58.170689   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.170758   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.171080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:58.171123   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:58.669796   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.170073   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.170489   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.670565   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.170445   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.170546   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.170880   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.669804   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.670208   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:00.670259   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:01.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.169859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:01.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.670097   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.670276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:02.670355   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:03.170035   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.170108   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.170401   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.669890   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.670202   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.169758   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.169836   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.170117   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.670079   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.670164   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.670516   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:04.670563   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:05.169839   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.169935   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.170260   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:05.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.669823   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.670097   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.170195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.670297   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:07.169768   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.169841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.170149   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:07.170196   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:07.669845   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.669915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.169973   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.170047   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.170399   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.670082   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.670165   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:09.170372   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.170444   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.170740   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:09.170790   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:09.670553   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.670631   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.670948   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.170667   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.170738   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.170996   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.669729   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.669805   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.670126   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.669912   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.669979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:11.670280   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~60 near-identical poll cycles omitted: GET https://192.168.49.2:8441/api/v1/nodes/functional-644034 repeated every ~500 ms from 05:52:12 through 05:53:13 with the same Accept and User-Agent headers, every response empty (status="" headers="" milliseconds=0), and the node_ready "connection refused" warning recurring every 2-3 s ...]
	W1210 05:53:13.170268   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:13.669861   51953 type.go:168] "Request Body" body=""
	I1210 05:53:13.669935   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:13.670288   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:14.169996   51953 type.go:168] "Request Body" body=""
	I1210 05:53:14.170071   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:14.170413   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:14.670109   51953 type.go:168] "Request Body" body=""
	I1210 05:53:14.670182   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:14.670458   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:15.169830   51953 type.go:168] "Request Body" body=""
	I1210 05:53:15.169954   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:15.170288   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:15.170343   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:15.669901   51953 type.go:168] "Request Body" body=""
	I1210 05:53:15.669977   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:15.670322   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:16.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:53:16.169825   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:16.170141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:16.669852   51953 type.go:168] "Request Body" body=""
	I1210 05:53:16.669933   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:16.670259   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:17.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:53:17.169896   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:17.170232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:17.669789   51953 type.go:168] "Request Body" body=""
	I1210 05:53:17.669855   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:17.670121   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:17.670179   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:18.169880   51953 type.go:168] "Request Body" body=""
	I1210 05:53:18.169953   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:18.170308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:18.670029   51953 type.go:168] "Request Body" body=""
	I1210 05:53:18.670125   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:18.670459   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:19.170387   51953 type.go:168] "Request Body" body=""
	I1210 05:53:19.170452   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:19.170715   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:19.670679   51953 type.go:168] "Request Body" body=""
	I1210 05:53:19.670747   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:19.671074   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:19.671133   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:20.169842   51953 type.go:168] "Request Body" body=""
	I1210 05:53:20.169925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:20.170257   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:20.669763   51953 type.go:168] "Request Body" body=""
	I1210 05:53:20.669856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:20.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:21.169837   51953 type.go:168] "Request Body" body=""
	I1210 05:53:21.169912   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:21.170234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:21.669987   51953 type.go:168] "Request Body" body=""
	I1210 05:53:21.670060   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:21.670390   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:22.170082   51953 type.go:168] "Request Body" body=""
	I1210 05:53:22.170158   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:22.170445   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:22.170499   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:22.669862   51953 type.go:168] "Request Body" body=""
	I1210 05:53:22.669943   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:22.670295   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:23.169959   51953 type.go:168] "Request Body" body=""
	I1210 05:53:23.170036   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:23.170370   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:23.669779   51953 type.go:168] "Request Body" body=""
	I1210 05:53:23.669847   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:23.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:24.169812   51953 type.go:168] "Request Body" body=""
	I1210 05:53:24.169882   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:24.170274   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:24.670128   51953 type.go:168] "Request Body" body=""
	I1210 05:53:24.670208   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:24.670549   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:24.670605   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:25.170307   51953 type.go:168] "Request Body" body=""
	I1210 05:53:25.170382   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:25.170719   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:25.670478   51953 type.go:168] "Request Body" body=""
	I1210 05:53:25.670547   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:25.670828   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:26.170599   51953 type.go:168] "Request Body" body=""
	I1210 05:53:26.170671   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:26.171033   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:26.669709   51953 type.go:168] "Request Body" body=""
	I1210 05:53:26.669782   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:26.670054   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:27.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:53:27.169831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:27.170139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:27.170198   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:27.669834   51953 type.go:168] "Request Body" body=""
	I1210 05:53:27.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:27.670219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:28.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:53:28.169828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:28.170132   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:28.669819   51953 type.go:168] "Request Body" body=""
	I1210 05:53:28.669888   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:28.670189   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:29.170163   51953 type.go:168] "Request Body" body=""
	I1210 05:53:29.170243   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:29.170572   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:29.170631   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:29.670270   51953 type.go:168] "Request Body" body=""
	I1210 05:53:29.670337   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:29.670607   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:30.170496   51953 type.go:168] "Request Body" body=""
	I1210 05:53:30.170584   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:30.170947   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:30.670768   51953 type.go:168] "Request Body" body=""
	I1210 05:53:30.670854   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:30.671206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:31.169798   51953 type.go:168] "Request Body" body=""
	I1210 05:53:31.169872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:31.170184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:31.669871   51953 type.go:168] "Request Body" body=""
	I1210 05:53:31.669951   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:31.670321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:31.670376   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:32.169884   51953 type.go:168] "Request Body" body=""
	I1210 05:53:32.169959   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:32.170251   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:32.669923   51953 type.go:168] "Request Body" body=""
	I1210 05:53:32.669991   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:32.670335   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:33.169817   51953 type.go:168] "Request Body" body=""
	I1210 05:53:33.169889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:33.170220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:33.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:53:33.669885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:33.670198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:34.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:53:34.169833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:34.170101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:34.170150   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:34.670055   51953 type.go:168] "Request Body" body=""
	I1210 05:53:34.670124   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:34.670458   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:35.170164   51953 type.go:168] "Request Body" body=""
	I1210 05:53:35.170239   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:35.170615   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:35.670406   51953 type.go:168] "Request Body" body=""
	I1210 05:53:35.670480   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:35.670747   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:36.170521   51953 type.go:168] "Request Body" body=""
	I1210 05:53:36.170600   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:36.170924   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:36.170976   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:36.670598   51953 type.go:168] "Request Body" body=""
	I1210 05:53:36.670673   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:36.671006   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:37.170525   51953 type.go:168] "Request Body" body=""
	I1210 05:53:37.170598   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:37.170929   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:37.670698   51953 type.go:168] "Request Body" body=""
	I1210 05:53:37.670771   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:37.671111   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:38.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:53:38.169821   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:38.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:38.670414   51953 type.go:168] "Request Body" body=""
	I1210 05:53:38.670482   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:38.670791   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:38.670843   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:39.170611   51953 type.go:168] "Request Body" body=""
	I1210 05:53:39.170682   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:39.171033   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:39.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:53:39.669833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:39.670145   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:40.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:53:40.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:40.170087   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:40.669801   51953 type.go:168] "Request Body" body=""
	I1210 05:53:40.669881   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:40.670219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:41.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:53:41.169995   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:41.170355   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:41.170412   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:41.670056   51953 type.go:168] "Request Body" body=""
	I1210 05:53:41.670122   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:41.670440   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.169947   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.170336   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.670088   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.670163   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.670484   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:43.170162   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.170230   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.170547   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:43.170609   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:43.670381   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.670459   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.670797   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.170478   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.170553   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.170917   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.670710   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.670781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.671096   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.169927   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.170248   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.670167   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.670243   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.670596   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:45.670654   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:46.170356   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.170470   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.170775   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:46.670631   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.670706   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.671056   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.169777   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.169864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.170180   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.670484   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.670850   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:47.670896   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:48.170703   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.170773   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.171186   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:48.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.670270   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.170239   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.170314   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.170632   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.670158   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.670250   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.670638   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:50.170456   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.170536   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.170897   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:50.170949   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:50.670681   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.670750   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.671080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.169790   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.170201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.669911   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.669983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.670289   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.169885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.170158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:52.670299   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:53.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.170220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:53.669788   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.669864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.670142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.169882   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.169960   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.170294   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.670138   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.670217   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.670514   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:54.670556   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:55.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.670235   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.169933   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.170005   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.170326   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.669987   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.670052   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.670317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:57.170000   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.170105   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.170463   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:57.170520   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:57.670190   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.670263   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.670595   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.170369   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.170773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.670583   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.670669   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.671047   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:59.170051   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.170137   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.170479   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:59.170549   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:59.669763   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.669831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:54:00.169895   51953 type.go:168] "Request Body" body=""
	I1210 05:54:00.170210   51953 node_ready.go:38] duration metric: took 6m0.000621671s for node "functional-644034" to be "Ready" ...
	I1210 05:54:00.173449   51953 out.go:203] 
	W1210 05:54:00.176680   51953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 05:54:00.176713   51953 out.go:285] * 
	W1210 05:54:00.178858   51953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:54:00.215003   51953 out.go:203] 

                                                
                                                
** /stderr **
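The stderr tail above is one long readiness poll: roughly every 500ms minikube GETs /api/v1/nodes/functional-644034, every dial to 192.168.49.2:8441 is refused, and when the 6m0s budget runs out node_ready.go gives up with "WaitNodeCondition: context deadline exceeded". A minimal Go sketch of that bounded-poll shape, for readers tracing the loop — pollNodeReady is a hypothetical helper, not minikube's actual code, and TLS verification is deliberately skipped to keep it self-contained:

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollNodeReady re-issues a GET against the node object every interval
	// until the apiserver answers or ctx's deadline expires.
	func pollNodeReady(ctx context.Context, url string, interval time.Duration) error {
		client := &http.Client{
			Timeout: interval,
			// A real client verifies the cluster CA; skipped here for the sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			resp, err := client.Do(req)
			if err == nil {
				resp.Body.Close()
				return nil // apiserver answered; a real waiter would now decode the Ready condition
			}
			fmt.Printf("error getting node (will retry): %v\n", err) // e.g. connect: connection refused
			select {
			case <-ctx.Done():
				return ctx.Err() // surfaces as "WaitNodeCondition: context deadline exceeded"
			case <-ticker.C:
			}
		}
	}

	func main() {
		// 6m0s matches the "wait 6m0s for node" budget in the failure above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(pollNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-644034", 500*time.Millisecond))
	}
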
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-644034 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.307087299s for "functional-644034" cluster.
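Notably, every failed attempt in the loop is "connect: connection refused" rather than an i/o timeout, and the docker inspect output in the post-mortem below shows the container still running, which points at the apiserver process inside the guest rather than at the container or the docker network. A throwaway probe that makes the same distinction — address and port taken from the log; this is not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" comes back immediately (host reachable, port closed);
		// a down host or broken network shows up as an i/o timeout instead.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}
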
I1210 05:54:00.817484    4116 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
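
The inspect dump above is the raw JSON; in practice the post-mortem only needs a few fields from it (state, restart count, mapped host ports). A minimal Go sketch, assuming only that docker is on PATH (the program and template below are illustrative, not part of the harness), pulling the host port mapped to the API server port 8441/tcp the same way the later log lines do for 22/tcp:

	// inspect_port.go: illustrative only -- extracts one field from the
	// docker inspect output shown above instead of dumping all of it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template shape that cli_runner.go logs below for "22/tcp".
		tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-644034").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // "32791" per the Ports map above
	}
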
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (465.809501ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
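
The "may be ok" note reflects that "minikube status" encodes component health in its exit code, so a non-zero exit with the host reported as Running is not necessarily fatal to the post-mortem. A hedged sketch of recovering that code in Go (the wrapper below is illustrative; the meaning of each exit code is defined by minikube, not by this snippet):

	// status_code.go: illustrative wrapper, not from helpers_test.go.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}", "-p", "functional-644034")
		out, err := cmd.Output()
		fmt.Printf("stdout: %s", out) // "Running" in the run above
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode()) // 2 here, flagged "may be ok"
		}
	}
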
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-644034 logs -n 25: (1.185220642s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh sudo cat /etc/test/nested/copy/4116/hosts                                                                                                 │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image save kicbase/echo-server:functional-944360 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image rm kicbase/echo-server:functional-944360 --alsologtostderr                                                                              │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image save --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format short --alsologtostderr                                                                                                     │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format yaml --alsologtostderr                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh pgrep buildkitd                                                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ image          │ functional-944360 image ls --format json --alsologtostderr                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format table --alsologtostderr                                                                                                     │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr                                                          │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ delete         │ -p functional-944360                                                                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ start          │ -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ start          │ -p functional-644034 --alsologtostderr -v=8                                                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:47:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:47:54.556574   51953 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:47:54.556774   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.556804   51953 out.go:374] Setting ErrFile to fd 2...
	I1210 05:47:54.556824   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.557680   51953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:47:54.558123   51953 out.go:368] Setting JSON to false
	I1210 05:47:54.558985   51953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1825,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:47:54.559094   51953 start.go:143] virtualization:  
	I1210 05:47:54.562634   51953 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:47:54.566518   51953 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:47:54.566592   51953 notify.go:221] Checking for updates...
	I1210 05:47:54.572379   51953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:47:54.575335   51953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:54.578363   51953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:47:54.581210   51953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:47:54.584186   51953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:47:54.587618   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:54.587759   51953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:47:54.618368   51953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:47:54.618493   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.683662   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.67215006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.683767   51953 docker.go:319] overlay module found
	I1210 05:47:54.686996   51953 out.go:179] * Using the docker driver based on existing profile
	I1210 05:47:54.689865   51953 start.go:309] selected driver: docker
	I1210 05:47:54.689883   51953 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.689998   51953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:47:54.690096   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.769093   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.760185758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.769542   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:54.769597   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:54.769652   51953 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.772754   51953 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:47:54.775504   51953 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:47:54.778330   51953 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:47:54.781109   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:54.781186   51953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:47:54.800171   51953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:47:54.800192   51953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:47:54.839003   51953 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:47:55.003206   51953 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
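
Both preload locations return 404 for v1.35.0-rc.1, which is why this start path falls back to caching the component images individually (the cache.go lines further down). An illustrative probe, assuming nothing beyond the two URLs logged above, that reproduces the check with HEAD requests:

	// preload_probe.go: illustrative only, not minikube code -- reproduces
	// the two 404s logged above by issuing HEAD requests at the same URLs.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		urls := []string{
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
			"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
		}
		for _, u := range urls {
			resp, err := http.Head(u)
			if err != nil {
				fmt.Println(u, "error:", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(resp.StatusCode, u) // 404 in this run, forcing per-image caching
		}
	}
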
	I1210 05:47:55.003455   51953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:47:55.003769   51953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:47:55.003826   51953 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.003903   51953 start.go:364] duration metric: took 49.001µs to acquireMachinesLock for "functional-644034"
	I1210 05:47:55.003933   51953 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:47:55.003940   51953 fix.go:54] fixHost starting: 
	I1210 05:47:55.004094   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.004258   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:55.028659   51953 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:47:55.028694   51953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:47:55.031932   51953 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:47:55.031977   51953 machine.go:94] provisionDockerMachine start ...
	I1210 05:47:55.032062   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.055133   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.055465   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.055479   51953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:47:55.170848   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.207999   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.208023   51953 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:47:55.208102   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.228767   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.229073   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.229085   51953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:47:55.357858   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.390746   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.390831   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.434495   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.434811   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.434828   51953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:47:55.523319   51953 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523359   51953 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523419   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:47:55.523430   51953 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.759µs
	I1210 05:47:55.523435   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:47:55.523445   51953 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 87.246µs
	I1210 05:47:55.523453   51953 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523438   51953 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:47:55.523449   51953 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523467   51953 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523481   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:47:55.523488   51953 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.262µs
	I1210 05:47:55.523494   51953 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:47:55.523503   51953 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523523   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:47:55.523531   51953 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 65.428µs
	I1210 05:47:55.523538   51953 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523542   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:47:55.523548   51953 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.473µs
	I1210 05:47:55.523554   51953 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:47:55.523548   51953 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523565   51953 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523317   51953 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523599   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:47:55.523607   51953 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.7µs
	I1210 05:47:55.523610   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:47:55.523613   51953 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:47:55.523600   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:47:55.523617   51953 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 70.203µs
	I1210 05:47:55.523622   51953 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 325.49µs
	I1210 05:47:55.523626   51953 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523628   51953 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523644   51953 cache.go:87] Successfully saved all images to host disk.
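
With no preload available, each image is verified against the per-image cache on the host before the start proceeds. A small sketch that re-runs those existence checks (the program is illustrative; the paths are taken verbatim from the cache.go lines above):

	// cache_check.go: illustrative re-run of the per-image cache checks.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		base := "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64"
		images := []string{
			"gcr.io/k8s-minikube/storage-provisioner_v5",
			"registry.k8s.io/kube-apiserver_v1.35.0-rc.1",
			"registry.k8s.io/pause_3.10.1",
			"registry.k8s.io/etcd_3.6.6-0",
		}
		for _, img := range images {
			if _, err := os.Stat(filepath.Join(base, img)); err == nil {
				fmt.Println("exists:", img) // matches the "exists" lines in the log
			} else {
				fmt.Println("missing:", img)
			}
		}
	}
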
	I1210 05:47:55.587205   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:47:55.587232   51953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:47:55.587288   51953 ubuntu.go:190] setting up certificates
	I1210 05:47:55.587298   51953 provision.go:84] configureAuth start
	I1210 05:47:55.587369   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:55.604738   51953 provision.go:143] copyHostCerts
	I1210 05:47:55.604778   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604816   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:47:55.604828   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604905   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:47:55.605000   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605022   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:47:55.605029   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605061   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:47:55.605114   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605134   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:47:55.605139   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605169   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:47:55.605233   51953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:47:55.781276   51953 provision.go:177] copyRemoteCerts
	I1210 05:47:55.781365   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:47:55.781432   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.797956   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:55.902711   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 05:47:55.902771   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:47:55.919779   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 05:47:55.919840   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:47:55.936935   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 05:47:55.936994   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:47:55.953689   51953 provision.go:87] duration metric: took 366.363656ms to configureAuth
	I1210 05:47:55.953721   51953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:47:55.953915   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:55.953927   51953 machine.go:97] duration metric: took 921.944178ms to provisionDockerMachine
	I1210 05:47:55.953936   51953 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:47:55.953952   51953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:47:55.954004   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:47:55.954054   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.971130   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.075277   51953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:47:56.078673   51953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:47:56.078694   51953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:47:56.078699   51953 command_runner.go:130] > VERSION_ID="12"
	I1210 05:47:56.078704   51953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:47:56.078708   51953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:47:56.078712   51953 command_runner.go:130] > ID=debian
	I1210 05:47:56.078717   51953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:47:56.078725   51953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:47:56.078732   51953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:47:56.078800   51953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:47:56.078828   51953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:47:56.078840   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:47:56.078899   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:47:56.078986   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:47:56.078998   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1210 05:47:56.079103   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:47:56.079112   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> /etc/test/nested/copy/4116/hosts
	I1210 05:47:56.079156   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:47:56.086554   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:56.104005   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:47:56.121596   51953 start.go:296] duration metric: took 167.644644ms for postStartSetup
	I1210 05:47:56.121686   51953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:47:56.121728   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.138924   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.243468   51953 command_runner.go:130] > 14%
	I1210 05:47:56.243960   51953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:47:56.248281   51953 command_runner.go:130] > 169G
	I1210 05:47:56.248748   51953 fix.go:56] duration metric: took 1.244804723s for fixHost
	I1210 05:47:56.248771   51953 start.go:83] releasing machines lock for "functional-644034", held for 1.24485909s
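
Machine operations are serialized behind the named lock acquired earlier with the Delay:500ms/Timeout:10m0s spec, and both acquire and release emit duration metrics. A simplified sketch of that retry-until-timeout pattern (the real lock is cross-process; the in-memory map here is only a stand-in):

	// lockretry.go: simplified illustration of the acquire/release pattern
	// logged above; not minikube's actual lock implementation.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var held = map[string]bool{} // stand-in for a cross-process lock table

	func acquire(name string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if !held[name] {
				held[name] = true
				return nil
			}
			time.Sleep(delay) // retry every Delay, up to Timeout
		}
		return errors.New("timed out acquiring " + name)
	}

	func main() {
		start := time.Now()
		if err := acquire("functional-644034", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
		held["functional-644034"] = false // release, as in "releasing machines lock"
	}
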
	I1210 05:47:56.248837   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:56.266070   51953 ssh_runner.go:195] Run: cat /version.json
	I1210 05:47:56.266123   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.266146   51953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:47:56.266199   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.283872   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.284272   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.472387   51953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 05:47:56.475023   51953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:47:56.475222   51953 ssh_runner.go:195] Run: systemctl --version
	I1210 05:47:56.481051   51953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:47:56.481144   51953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:47:56.481557   51953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:47:56.485740   51953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:47:56.485802   51953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:47:56.485889   51953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:47:56.493391   51953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:47:56.493413   51953 start.go:496] detecting cgroup driver to use...
	I1210 05:47:56.493443   51953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:47:56.493499   51953 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:47:56.508720   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:47:56.521711   51953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:47:56.521777   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:47:56.537527   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:47:56.551315   51953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:47:56.656595   51953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:47:56.765354   51953 docker.go:234] disabling docker service ...
	I1210 05:47:56.765422   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:47:56.780352   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:47:56.793570   51953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:47:56.900961   51953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:47:57.025824   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:47:57.039104   51953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:47:57.052658   51953 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:47:57.053978   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.213891   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:47:57.223164   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:47:57.232001   51953 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:47:57.232070   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:47:57.240776   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.249302   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:47:57.258094   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.266381   51953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:47:57.274230   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:47:57.282766   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:47:57.291675   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
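All of the sed passes above edit /etc/containerd/config.toml in place. The key one for this run is the cgroup-driver rewrite; a standalone equivalent using the same expression as the log:

    # keep containerd on the cgroupfs driver (host is cgroup v1)
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # verify the edit took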
	I1210 05:47:57.300542   51953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:47:57.307150   51953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:47:57.308059   51953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
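The two kernel knobs touched here are prerequisites for pod networking: bridged traffic must traverse iptables, and the node must forward IPv4. Replayed from the log:

    sudo sysctl net.bridge.bridge-nf-call-iptables        # expect "= 1"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # enable routing between pods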
	I1210 05:47:57.315237   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:57.433904   51953 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:47:57.552794   51953 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:47:57.552901   51953 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:47:57.556769   51953 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 05:47:57.556839   51953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:47:57.556861   51953 command_runner.go:130] > Device: 0,73	Inode: 1614        Links: 1
	I1210 05:47:57.556893   51953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:57.556921   51953 command_runner.go:130] > Access: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556947   51953 command_runner.go:130] > Modify: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556968   51953 command_runner.go:130] > Change: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.557011   51953 command_runner.go:130] >  Birth: -
	I1210 05:47:57.557078   51953 start.go:564] Will wait 60s for crictl version
	I1210 05:47:57.557155   51953 ssh_runner.go:195] Run: which crictl
	I1210 05:47:57.560538   51953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:47:57.560706   51953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:47:57.582482   51953 command_runner.go:130] > Version:  0.1.0
	I1210 05:47:57.582585   51953 command_runner.go:130] > RuntimeName:  containerd
	I1210 05:47:57.582609   51953 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 05:47:57.582715   51953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:47:57.584523   51953 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:47:57.584650   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.601892   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.603507   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.622429   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
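The two version probes above take different paths: crictl exercises the CRI API over the socket, while containerd --version only inspects the binary. Either can be run by hand to confirm the runtime minikube will drive:

    sudo /usr/local/bin/crictl version   # RuntimeName/RuntimeVersion via the CRI API
    containerd --version                 # binary build info (v2.2.0 here)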
	I1210 05:47:57.630007   51953 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:47:57.632949   51953 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:47:57.648626   51953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:47:57.652604   51953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 05:47:57.652711   51953 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:47:57.652889   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.820648   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.971830   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:58.124406   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:58.124495   51953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:47:58.146688   51953 command_runner.go:130] > {
	I1210 05:47:58.146710   51953 command_runner.go:130] >   "images":  [
	I1210 05:47:58.146724   51953 command_runner.go:130] >     {
	I1210 05:47:58.146735   51953 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 05:47:58.146741   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146747   51953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 05:47:58.146750   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146755   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146765   51953 command_runner.go:130] >       "size":  "8032639",
	I1210 05:47:58.146779   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146784   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146790   51953 command_runner.go:130] >     },
	I1210 05:47:58.146794   51953 command_runner.go:130] >     {
	I1210 05:47:58.146801   51953 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 05:47:58.146808   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146813   51953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 05:47:58.146817   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146821   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146830   51953 command_runner.go:130] >       "size":  "21166088",
	I1210 05:47:58.146837   51953 command_runner.go:130] >       "username":  "nonroot",
	I1210 05:47:58.146841   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146844   51953 command_runner.go:130] >     },
	I1210 05:47:58.146847   51953 command_runner.go:130] >     {
	I1210 05:47:58.146855   51953 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 05:47:58.146861   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146867   51953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 05:47:58.146873   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146878   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146885   51953 command_runner.go:130] >       "size":  "21748497",
	I1210 05:47:58.146888   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146897   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146904   51953 command_runner.go:130] >       },
	I1210 05:47:58.146908   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146912   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146917   51953 command_runner.go:130] >     },
	I1210 05:47:58.146925   51953 command_runner.go:130] >     {
	I1210 05:47:58.146933   51953 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 05:47:58.146939   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146948   51953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 05:47:58.146955   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146959   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146964   51953 command_runner.go:130] >       "size":  "24690149",
	I1210 05:47:58.146967   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146972   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146975   51953 command_runner.go:130] >       },
	I1210 05:47:58.146979   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146985   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146990   51953 command_runner.go:130] >     },
	I1210 05:47:58.146996   51953 command_runner.go:130] >     {
	I1210 05:47:58.147003   51953 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 05:47:58.147007   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147030   51953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 05:47:58.147034   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147038   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147042   51953 command_runner.go:130] >       "size":  "20670083",
	I1210 05:47:58.147046   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147050   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147056   51953 command_runner.go:130] >       },
	I1210 05:47:58.147060   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147067   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147070   51953 command_runner.go:130] >     },
	I1210 05:47:58.147081   51953 command_runner.go:130] >     {
	I1210 05:47:58.147088   51953 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 05:47:58.147092   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147099   51953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 05:47:58.147103   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147107   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147111   51953 command_runner.go:130] >       "size":  "22430795",
	I1210 05:47:58.147122   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147127   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147132   51953 command_runner.go:130] >     },
	I1210 05:47:58.147135   51953 command_runner.go:130] >     {
	I1210 05:47:58.147144   51953 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 05:47:58.147150   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147155   51953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 05:47:58.147161   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147173   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147180   51953 command_runner.go:130] >       "size":  "15403461",
	I1210 05:47:58.147183   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147187   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147190   51953 command_runner.go:130] >       },
	I1210 05:47:58.147194   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147198   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147205   51953 command_runner.go:130] >     },
	I1210 05:47:58.147208   51953 command_runner.go:130] >     {
	I1210 05:47:58.147215   51953 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 05:47:58.147221   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147226   51953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 05:47:58.147232   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147236   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147248   51953 command_runner.go:130] >       "size":  "265458",
	I1210 05:47:58.147252   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147256   51953 command_runner.go:130] >         "value":  "65535"
	I1210 05:47:58.147259   51953 command_runner.go:130] >       },
	I1210 05:47:58.147270   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147274   51953 command_runner.go:130] >       "pinned":  true
	I1210 05:47:58.147277   51953 command_runner.go:130] >     }
	I1210 05:47:58.147282   51953 command_runner.go:130] >   ]
	I1210 05:47:58.147284   51953 command_runner.go:130] > }
	I1210 05:47:58.149521   51953 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:47:58.149540   51953 cache_images.go:86] Images are preloaded, skipping loading
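The preload check above parses the crictl JSON and matches repoTags against the image set expected for v1.35.0-rc.1. A quick manual equivalent, assuming jq is available on the node (jq is not part of the minikube flow, just a convenience here):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort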
	I1210 05:47:58.149552   51953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:47:58.149645   51953 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:47:58.149706   51953 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:47:58.176587   51953 command_runner.go:130] > {
	I1210 05:47:58.176610   51953 command_runner.go:130] >   "cniconfig": {
	I1210 05:47:58.176616   51953 command_runner.go:130] >     "Networks": [
	I1210 05:47:58.176620   51953 command_runner.go:130] >       {
	I1210 05:47:58.176624   51953 command_runner.go:130] >         "Config": {
	I1210 05:47:58.176629   51953 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 05:47:58.176644   51953 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 05:47:58.176648   51953 command_runner.go:130] >           "Plugins": [
	I1210 05:47:58.176652   51953 command_runner.go:130] >             {
	I1210 05:47:58.176657   51953 command_runner.go:130] >               "Network": {
	I1210 05:47:58.176662   51953 command_runner.go:130] >                 "ipam": {},
	I1210 05:47:58.176673   51953 command_runner.go:130] >                 "type": "loopback"
	I1210 05:47:58.176678   51953 command_runner.go:130] >               },
	I1210 05:47:58.176687   51953 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 05:47:58.176691   51953 command_runner.go:130] >             }
	I1210 05:47:58.176694   51953 command_runner.go:130] >           ],
	I1210 05:47:58.176704   51953 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 05:47:58.176717   51953 command_runner.go:130] >         },
	I1210 05:47:58.176725   51953 command_runner.go:130] >         "IFName": "lo"
	I1210 05:47:58.176728   51953 command_runner.go:130] >       }
	I1210 05:47:58.176732   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176736   51953 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 05:47:58.176742   51953 command_runner.go:130] >     "PluginDirs": [
	I1210 05:47:58.176746   51953 command_runner.go:130] >       "/opt/cni/bin"
	I1210 05:47:58.176752   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176756   51953 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 05:47:58.176771   51953 command_runner.go:130] >     "Prefix": "eth"
	I1210 05:47:58.176775   51953 command_runner.go:130] >   },
	I1210 05:47:58.176782   51953 command_runner.go:130] >   "config": {
	I1210 05:47:58.176789   51953 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 05:47:58.176793   51953 command_runner.go:130] >       "/etc/cdi",
	I1210 05:47:58.176797   51953 command_runner.go:130] >       "/var/run/cdi"
	I1210 05:47:58.176803   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176807   51953 command_runner.go:130] >     "cni": {
	I1210 05:47:58.176813   51953 command_runner.go:130] >       "binDir": "",
	I1210 05:47:58.176817   51953 command_runner.go:130] >       "binDirs": [
	I1210 05:47:58.176821   51953 command_runner.go:130] >         "/opt/cni/bin"
	I1210 05:47:58.176825   51953 command_runner.go:130] >       ],
	I1210 05:47:58.176836   51953 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 05:47:58.176840   51953 command_runner.go:130] >       "confTemplate": "",
	I1210 05:47:58.176844   51953 command_runner.go:130] >       "ipPref": "",
	I1210 05:47:58.176850   51953 command_runner.go:130] >       "maxConfNum": 1,
	I1210 05:47:58.176854   51953 command_runner.go:130] >       "setupSerially": false,
	I1210 05:47:58.176861   51953 command_runner.go:130] >       "useInternalLoopback": false
	I1210 05:47:58.176864   51953 command_runner.go:130] >     },
	I1210 05:47:58.176874   51953 command_runner.go:130] >     "containerd": {
	I1210 05:47:58.176880   51953 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 05:47:58.176886   51953 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 05:47:58.176892   51953 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 05:47:58.176901   51953 command_runner.go:130] >       "runtimes": {
	I1210 05:47:58.176905   51953 command_runner.go:130] >         "runc": {
	I1210 05:47:58.176909   51953 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 05:47:58.176915   51953 command_runner.go:130] >           "PodAnnotations": null,
	I1210 05:47:58.176920   51953 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 05:47:58.176926   51953 command_runner.go:130] >           "cgroupWritable": false,
	I1210 05:47:58.176930   51953 command_runner.go:130] >           "cniConfDir": "",
	I1210 05:47:58.176934   51953 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 05:47:58.176939   51953 command_runner.go:130] >           "io_type": "",
	I1210 05:47:58.176943   51953 command_runner.go:130] >           "options": {
	I1210 05:47:58.176950   51953 command_runner.go:130] >             "BinaryName": "",
	I1210 05:47:58.176955   51953 command_runner.go:130] >             "CriuImagePath": "",
	I1210 05:47:58.176970   51953 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 05:47:58.176977   51953 command_runner.go:130] >             "IoGid": 0,
	I1210 05:47:58.176981   51953 command_runner.go:130] >             "IoUid": 0,
	I1210 05:47:58.176985   51953 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 05:47:58.176991   51953 command_runner.go:130] >             "Root": "",
	I1210 05:47:58.176995   51953 command_runner.go:130] >             "ShimCgroup": "",
	I1210 05:47:58.177002   51953 command_runner.go:130] >             "SystemdCgroup": false
	I1210 05:47:58.177005   51953 command_runner.go:130] >           },
	I1210 05:47:58.177011   51953 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 05:47:58.177019   51953 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 05:47:58.177023   51953 command_runner.go:130] >           "runtimePath": "",
	I1210 05:47:58.177030   51953 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 05:47:58.177035   51953 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 05:47:58.177041   51953 command_runner.go:130] >           "snapshotter": ""
	I1210 05:47:58.177044   51953 command_runner.go:130] >         }
	I1210 05:47:58.177049   51953 command_runner.go:130] >       }
	I1210 05:47:58.177052   51953 command_runner.go:130] >     },
	I1210 05:47:58.177065   51953 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 05:47:58.177073   51953 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 05:47:58.177078   51953 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 05:47:58.177083   51953 command_runner.go:130] >     "disableApparmor": false,
	I1210 05:47:58.177090   51953 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 05:47:58.177094   51953 command_runner.go:130] >     "disableProcMount": false,
	I1210 05:47:58.177098   51953 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 05:47:58.177102   51953 command_runner.go:130] >     "enableCDI": true,
	I1210 05:47:58.177106   51953 command_runner.go:130] >     "enableSelinux": false,
	I1210 05:47:58.177114   51953 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 05:47:58.177118   51953 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 05:47:58.177125   51953 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 05:47:58.177130   51953 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 05:47:58.177138   51953 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 05:47:58.177142   51953 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 05:47:58.177147   51953 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 05:47:58.177160   51953 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177170   51953 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 05:47:58.177176   51953 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177186   51953 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 05:47:58.177190   51953 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 05:47:58.177193   51953 command_runner.go:130] >   },
	I1210 05:47:58.177197   51953 command_runner.go:130] >   "features": {
	I1210 05:47:58.177201   51953 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 05:47:58.177204   51953 command_runner.go:130] >   },
	I1210 05:47:58.177209   51953 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 05:47:58.177221   51953 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177233   51953 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177237   51953 command_runner.go:130] >   "runtimeHandlers": [
	I1210 05:47:58.177246   51953 command_runner.go:130] >     {
	I1210 05:47:58.177250   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177255   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177259   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177261   51953 command_runner.go:130] >       }
	I1210 05:47:58.177264   51953 command_runner.go:130] >     },
	I1210 05:47:58.177267   51953 command_runner.go:130] >     {
	I1210 05:47:58.177271   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177275   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177279   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177282   51953 command_runner.go:130] >       },
	I1210 05:47:58.177287   51953 command_runner.go:130] >       "name": "runc"
	I1210 05:47:58.177289   51953 command_runner.go:130] >     }
	I1210 05:47:58.177293   51953 command_runner.go:130] >   ],
	I1210 05:47:58.177296   51953 command_runner.go:130] >   "status": {
	I1210 05:47:58.177300   51953 command_runner.go:130] >     "conditions": [
	I1210 05:47:58.177303   51953 command_runner.go:130] >       {
	I1210 05:47:58.177307   51953 command_runner.go:130] >         "message": "",
	I1210 05:47:58.177314   51953 command_runner.go:130] >         "reason": "",
	I1210 05:47:58.177318   51953 command_runner.go:130] >         "status": true,
	I1210 05:47:58.177329   51953 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 05:47:58.177335   51953 command_runner.go:130] >       },
	I1210 05:47:58.177339   51953 command_runner.go:130] >       {
	I1210 05:47:58.177345   51953 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 05:47:58.177356   51953 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 05:47:58.177360   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177365   51953 command_runner.go:130] >         "type": "NetworkReady"
	I1210 05:47:58.177373   51953 command_runner.go:130] >       },
	I1210 05:47:58.177376   51953 command_runner.go:130] >       {
	I1210 05:47:58.177397   51953 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 05:47:58.177406   51953 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 05:47:58.177414   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177420   51953 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 05:47:58.177425   51953 command_runner.go:130] >       }
	I1210 05:47:58.177428   51953 command_runner.go:130] >     ]
	I1210 05:47:58.177431   51953 command_runner.go:130] >   }
	I1210 05:47:58.177434   51953 command_runner.go:130] > }
	I1210 05:47:58.177746   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:58.177762   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:58.177786   51953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:47:58.177809   51953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:47:58.177931   51953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
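The generated manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is used; a hedged sketch, assuming the validate subcommand is present in this kubeadm build:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new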
	
	I1210 05:47:58.178005   51953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:47:58.184894   51953 command_runner.go:130] > kubeadm
	I1210 05:47:58.184912   51953 command_runner.go:130] > kubectl
	I1210 05:47:58.184916   51953 command_runner.go:130] > kubelet
	I1210 05:47:58.185786   51953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:47:58.185866   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:47:58.193140   51953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:47:58.205426   51953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:47:58.217773   51953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
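After the three scp calls the kubelet is wired up through a stock unit file plus the 10-kubeadm.conf drop-in. systemd can print the merged result, a quick way to confirm the ExecStart override shown earlier in this log actually landed:

    systemctl cat kubelet   # kubelet.service followed by the 10-kubeadm.conf drop-in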
	I1210 05:47:58.230424   51953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:47:58.234124   51953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:47:58.234224   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:58.348721   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:58.367663   51953 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:47:58.367683   51953 certs.go:195] generating shared ca certs ...
	I1210 05:47:58.367699   51953 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:58.367828   51953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:47:58.367870   51953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:47:58.367878   51953 certs.go:257] generating profile certs ...
	I1210 05:47:58.367976   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:47:58.368034   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:47:58.368079   51953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:47:58.368088   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:47:58.368100   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:47:58.368115   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:47:58.368126   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:47:58.368137   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:47:58.368148   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:47:58.368163   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:47:58.368174   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:47:58.368220   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:47:58.368248   51953 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:47:58.368256   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:47:58.368286   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:47:58.368309   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:47:58.368331   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:47:58.368373   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:58.368402   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.368414   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.368427   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.368978   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:47:58.388893   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:47:58.409416   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:47:58.428450   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:47:58.446489   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:47:58.465644   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:47:58.483264   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:47:58.500807   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:47:58.518107   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:47:58.536070   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:47:58.553632   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:47:58.571692   51953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:47:58.584898   51953 ssh_runner.go:195] Run: openssl version
	I1210 05:47:58.590608   51953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:47:58.591139   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.599076   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:47:58.606632   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610200   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610255   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610308   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.650574   51953 command_runner.go:130] > 51391683
	I1210 05:47:58.651004   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:47:58.658249   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.665388   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:47:58.672651   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676295   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676329   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676381   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.716661   51953 command_runner.go:130] > 3ec20f2e
	I1210 05:47:58.717156   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:47:58.724496   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.731755   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:47:58.739224   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742739   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742773   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742827   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.783109   51953 command_runner.go:130] > b5213941
	I1210 05:47:58.783531   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
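Each of the three certificate blocks above follows the pattern OpenSSL uses for its trust store: compute the subject hash of the PEM and symlink it as <hash>.0 under /etc/ssl/certs. Reproduced for the minikubeCA case (the b5213941 hash matches the log output):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    test -L "/etc/ssl/certs/${h}.0" && echo "trusted as ${h}.0"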
	I1210 05:47:58.790793   51953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794232   51953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794258   51953 command_runner.go:130] >   Size: 1172      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:47:58.794265   51953 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1210 05:47:58.794272   51953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:58.794286   51953 command_runner.go:130] > Access: 2025-12-10 05:43:51.022657545 +0000
	I1210 05:47:58.794292   51953 command_runner.go:130] > Modify: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794297   51953 command_runner.go:130] > Change: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794305   51953 command_runner.go:130] >  Birth: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794558   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:47:58.837377   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.837465   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:47:58.877636   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.878121   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:47:58.918797   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.919235   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:47:58.959487   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.960010   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:47:59.003251   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:59.003763   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:47:59.044279   51953 command_runner.go:130] > Certificate will not expire
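The -checkend 86400 probes above ask OpenSSL whether each control-plane certificate survives at least the next 24 hours (86400 seconds); exit status 0 produces the "Certificate will not expire" lines. One of them, standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "Certificate will not expire"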
	I1210 05:47:59.044747   51953 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:59.044823   51953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:47:59.044887   51953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:47:59.069970   51953 cri.go:89] found id: ""
	I1210 05:47:59.070038   51953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:47:59.076652   51953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:47:59.076673   51953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:47:59.076679   51953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:47:59.077535   51953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:47:59.077555   51953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:47:59.077617   51953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:47:59.084671   51953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:47:59.085448   51953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-644034" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.085850   51953 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "functional-644034" cluster setting kubeconfig missing "functional-644034" context setting]
	I1210 05:47:59.086310   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.087190   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.087371   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.088034   51953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:47:59.088055   51953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:47:59.088068   51953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:47:59.088074   51953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:47:59.088078   51953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:47:59.088429   51953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:47:59.089407   51953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:47:59.096980   51953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 05:47:59.097014   51953 kubeadm.go:602] duration metric: took 19.453757ms to restartPrimaryControlPlane
	I1210 05:47:59.097024   51953 kubeadm.go:403] duration metric: took 52.281886ms to StartCluster
	I1210 05:47:59.097064   51953 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097152   51953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.097734   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097941   51953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 05:47:59.098267   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:59.098318   51953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:47:59.098380   51953 addons.go:70] Setting storage-provisioner=true in profile "functional-644034"
	I1210 05:47:59.098393   51953 addons.go:239] Setting addon storage-provisioner=true in "functional-644034"
	I1210 05:47:59.098419   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.098907   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.101905   51953 out.go:179] * Verifying Kubernetes components...
	I1210 05:47:59.106662   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:59.109785   51953 addons.go:70] Setting default-storageclass=true in profile "functional-644034"
	I1210 05:47:59.109823   51953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-644034"
	I1210 05:47:59.110155   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.137186   51953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:47:59.140065   51953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.140094   51953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:47:59.140172   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.152137   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.152308   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
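
The `kapi.go` dump above is an ordinary client-go `rest.Config` built from the profile's client certificate. A minimal sketch of constructing an equivalent client, assuming k8s.io/client-go is available; the host and certificate paths are the ones printed in the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same GET the poll loop below keeps issuing against /api/v1/nodes/...
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"functional-644034", metav1.GetOptions{})
	if err != nil {
		// e.g. "connection refused" while the apiserver is down, as logged.
		fmt.Println("get node:", err)
		return
	}
	fmt.Println("node:", node.Name)
}
```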
	I1210 05:47:59.152605   51953 addons.go:239] Setting addon default-storageclass=true in "functional-644034"
	I1210 05:47:59.152636   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.153047   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.173160   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
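
The `sshutil` line above dials the forwarded Docker port with the profile's id_rsa. A rough equivalent with golang.org/x/crypto/ssh, as a sketch only; the address, user, and key path come from the log line, and InsecureIgnoreHostKey is tolerable for a throwaway test node but nowhere else:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	fmt.Println("ssh connection established")
}
```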
	I1210 05:47:59.202277   51953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:47:59.202307   51953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:47:59.202368   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.232670   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.321380   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:59.337472   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.374986   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
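
Each apply above is the version-pinned kubectl binary run with the in-cluster kubeconfig. A sketch of the equivalent invocation via os/exec, with paths exactly as logged; note that the error text that follows suggests `--validate=false` as an escape hatch, but minikube instead keeps retrying until the apiserver answers:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// sudo accepts VAR=value prefixes, so KUBECONFIG reaches kubectl.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down, validation cannot fetch the OpenAPI
		// schema and the command exits non-zero, exactly as logged.
		log.Printf("apply failed: %v\n%s", err, out)
	}
}
```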
	I1210 05:48:00.169551   51953 node_ready.go:35] waiting up to 6m0s for node "functional-644034" to be "Ready" ...
	I1210 05:48:00.169689   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.169752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
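
The request/response pairs that follow come from a readiness poll: node_ready.go re-fetches the Node object roughly every 500ms, for up to 6 minutes, and checks its Ready condition. A hedged sketch with client-go's wait helpers; `waitNodeReady` is illustrative, not minikube's actual function:

```go
package nodeready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the Node until its Ready condition is True. Transient
// errors (apiserver restarting, connection refused) are swallowed so the
// poll keeps going, matching the "will retry" warnings in the log.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```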
	I1210 05:48:00.170008   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170051   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170077   51953 retry.go:31] will retry after 139.03743ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170121   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170135   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170145   51953 retry.go:31] will retry after 348.331986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
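
The `retry.go` lines interleaved above show jittered, growing delays between apply attempts (139ms, 348ms, ...). A minimal sketch of that pattern; the attempt count and delay schedule here are illustrative, not minikube's exact backoff policy:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn with exponentially growing, jittered delays
// until it succeeds or attempts are exhausted, returning the last error.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 150*time.Millisecond, func() error {
		// Stand-in for the failing kubectl apply in the log above.
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}
```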
	I1210 05:48:00.170219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.310507   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.415931   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.416069   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.416135   51953 retry.go:31] will retry after 233.204425ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.519312   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.585157   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.585240   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.585274   51953 retry.go:31] will retry after 499.606359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.650447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.669993   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.712181   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.715417   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.715449   51953 retry.go:31] will retry after 781.025556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.086035   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.148055   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.148095   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.148115   51953 retry.go:31] will retry after 644.355236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.170281   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.170372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.170734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.497246   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:01.552133   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.555247   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.555278   51953 retry.go:31] will retry after 1.200680207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.670555   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.670646   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.670959   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.793341   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.851452   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.854727   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.854768   51953 retry.go:31] will retry after 727.381606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.170188   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.170290   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.170618   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:02.170696   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:02.583237   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:02.649935   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.649981   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.650022   51953 retry.go:31] will retry after 1.310515996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.670155   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.670292   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.670651   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:02.757075   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:02.818837   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.821796   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.821831   51953 retry.go:31] will retry after 1.687874073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:03.170317   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.170406   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.170707   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.670505   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.670583   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.670925   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.961404   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:04.024244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.024282   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.024323   51953 retry.go:31] will retry after 1.628415395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.170524   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.170651   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:04.171129   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:04.510724   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:04.566617   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.570030   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.570064   51953 retry.go:31] will retry after 2.695563296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.670310   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.670389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.670711   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.170563   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.170635   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.170967   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.653658   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:05.670351   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.670461   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.670799   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.744168   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:05.744207   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:05.744248   51953 retry.go:31] will retry after 1.470532715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:06.169848   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.169975   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.170317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:06.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.670264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:06.670329   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:07.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.170058   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:07.215626   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:07.266052   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:07.280336   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.280370   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.280387   51953 retry.go:31] will retry after 5.58106306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333195   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.333236   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333256   51953 retry.go:31] will retry after 2.610344026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.670753   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.670832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.671195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.169912   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.170281   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.669773   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.170205   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.170536   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:09.170594   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:09.670237   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.670311   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.670667   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.944159   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:10.010561   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:10.010619   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.010642   51953 retry.go:31] will retry after 2.5620788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.169787   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.169854   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.170167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:10.669895   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.669974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.169913   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.170234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.669833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.670159   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:11.670233   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:12.169956   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.170030   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.170375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.572886   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:12.631295   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.634400   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.634432   51953 retry.go:31] will retry after 5.90622422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.670736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.670808   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.671172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.862533   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:12.918893   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.918929   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.918949   51953 retry.go:31] will retry after 8.272023324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:13.170464   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.170532   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.170809   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:13.670589   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.670665   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.670979   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:13.671051   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:14.170623   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.170704   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.171052   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:14.669975   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.670351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.170046   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.170119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.170417   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.670099   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.670181   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:16.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:16.170210   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:16.669859   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.669945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.169889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.669877   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.669969   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.670225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:18.169971   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.170045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.170383   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:18.170445   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:18.540818   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:18.598871   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:18.601811   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.601841   51953 retry.go:31] will retry after 12.747843498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.670272   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.670582   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.170370   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.170452   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.170779   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.670779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.671114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.169841   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.169920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.170286   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.669841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.670151   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:20.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:21.169914   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.169987   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.170308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:21.191680   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:21.254244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:21.254291   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.254309   51953 retry.go:31] will retry after 13.504528238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node-status polling continues every ~500ms, 05:48:21-05:48:31, each GET to https://192.168.49.2:8441/api/v1/nodes/functional-644034 refused; node_ready "will retry" warnings at 05:48:22, 05:48:24, 05:48:27, 05:48:29 ...]
	I1210 05:48:31.350447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:31.407735   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:31.410898   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.410931   51953 retry.go:31] will retry after 18.518112559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.670455   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.670542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.670952   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:31.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
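
The interleaved "Request"/"Response" pairs throughout this log come from a logging wrapper around the Kubernetes client's HTTP transport. A minimal sketch of that pattern, assuming a hypothetical loggingTransport type:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingTransport wraps another http.RoundTripper and logs the verb,
// URL, status, and latency of every call, mirroring the
// round_trippers.go entries above. The type name is illustrative.
type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
	resp, err := t.next.RoundTrip(req)
	status := "" // empty status when the dial fails, as in this log
	if resp != nil {
		status = resp.Status
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", status, time.Since(start).Milliseconds())
	return resp, err
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	// Against the unreachable apiserver from this log, this prints an
	// empty status and returns "connection refused".
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-644034")
	fmt.Println(err)
}
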
	[... polling continues every ~500ms, 05:48:32-05:48:34; node_ready "will retry" warning at 05:48:34 ...]
	I1210 05:48:34.759888   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:34.813991   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:34.817148   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:34.817180   51953 retry.go:31] will retry after 7.858877757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
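
For context, the command each retry re-runs is a plain `kubectl apply` executed on the node with the cluster-internal kubeconfig. Because client-side validation first fetches the apiserver's OpenAPI schema, the apply fails at the validation step while the apiserver is unreachable, which is why kubectl suggests --validate=false. A local sketch with os/exec, using the binary and manifest paths from this log (applyAddon is a hypothetical wrapper, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon shells out to the pinned kubectl with the in-node
// kubeconfig, exactly as the "Run: sudo KUBECONFIG=... kubectl apply
// --force -f ..." lines show. sudo accepts VAR=value assignments
// before the command, so KUBECONFIG is passed through.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", manifest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println("apply failed:", err)
	}
}
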
	[... polling continues every ~500ms, 05:48:35-05:48:42; node_ready "will retry" warnings at 05:48:36, 05:48:38, 05:48:41 ...]
	I1210 05:48:42.677131   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:42.736218   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:42.736261   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:42.736279   51953 retry.go:31] will retry after 23.425189001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms, 05:48:43-05:48:49; node_ready "will retry" warnings at 05:48:43, 05:48:45, 05:48:47, 05:48:49 ...]
	I1210 05:48:49.930022   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:49.989791   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:49.993079   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:49.993114   51953 retry.go:31] will retry after 23.38662002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms, 05:48:50-05:49:06, every GET refused; node_ready "will retry" warnings at 05:48:52, 05:48:54, 05:48:56, 05:48:58, 05:49:00, 05:49:03, 05:49:05 ...]
	I1210 05:49:06.161707   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:06.170118   51953 type.go:168] "Request Body" body=""
	I1210 05:49:06.170187   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:06.170454   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:06.215983   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:06.219418   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:06.219449   51953 retry.go:31] will retry after 38.750779649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continues every ~500ms, 05:49:06-05:49:10; node_ready "will retry" warnings at 05:49:07, 05:49:09 ...]
	I1210 05:49:10.670454   51953 type.go:168] "Request Body" body=""
	I1210 05:49:10.670533   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:10.670873   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:11.170681   51953 type.go:168] "Request Body" body=""
	I1210 05:49:11.170756   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:11.171117   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:11.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:49:11.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:11.670185   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:12.169855   51953 type.go:168] "Request Body" body=""
	I1210 05:49:12.169925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:12.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:12.170304   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:12.669856   51953 type.go:168] "Request Body" body=""
	I1210 05:49:12.669928   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:12.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:13.169860   51953 type.go:168] "Request Body" body=""
	I1210 05:49:13.169943   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:13.170217   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:13.380712   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:13.443508   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:13.443549   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:13.443568   51953 retry.go:31] will retry after 17.108062036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
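Note why the apply fails before touching anything: kubectl's client-side validation first downloads the server's OpenAPI schema, so with the apiserver down the command dies at validation. A sketch of the shell-out these ssh_runner lines perform; the skipValidation knob is hypothetical, wiring up the --validate=false escape hatch that kubectl's own error text suggests, whereas minikube's real code simply retries the plain command:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// applyManifest runs kubectl apply the way the ssh_runner lines above do.
// skipValidation is a hypothetical flag for illustration only.
func applyManifest(kubectl, kubeconfig, manifest string, skipValidation bool) error {
	args := []string{"apply", "--force", "-f", manifest}
	if skipValidation {
		args = append(args, "--validate=false")
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w\n%s", strings.Join(args, " "), err, out)
	}
	return nil
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		false,
	)
	if err != nil {
		fmt.Println("apply failed, will retry:", err)
	}
}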
	I1210 05:49:13.669825   51953 type.go:168] "Request Body" body=""
	I1210 05:49:13.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:13.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:14.169973   51953 type.go:168] "Request Body" body=""
	I1210 05:49:14.170046   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:14.170360   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:14.170413   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:14.670243   51953 type.go:168] "Request Body" body=""
	I1210 05:49:14.670320   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:14.670588   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:15.170418   51953 type.go:168] "Request Body" body=""
	I1210 05:49:15.170487   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:15.170795   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:15.670586   51953 type.go:168] "Request Body" body=""
	I1210 05:49:15.670658   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:15.670975   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:16.170704   51953 type.go:168] "Request Body" body=""
	I1210 05:49:16.170776   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:16.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:16.171120   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:16.669813   51953 type.go:168] "Request Body" body=""
	I1210 05:49:16.669905   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:16.670255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:17.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:49:17.169899   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:17.170218   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:17.669769   51953 type.go:168] "Request Body" body=""
	I1210 05:49:17.669844   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:17.670143   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:18.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:49:18.169934   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:18.170269   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:18.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:49:18.670094   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:18.670415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:18.670472   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:19.170323   51953 type.go:168] "Request Body" body=""
	I1210 05:49:19.170395   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:19.170661   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:19.670601   51953 type.go:168] "Request Body" body=""
	I1210 05:49:19.670672   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:19.670993   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:20.169740   51953 type.go:168] "Request Body" body=""
	I1210 05:49:20.169817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:20.170150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:20.670516   51953 type.go:168] "Request Body" body=""
	I1210 05:49:20.670584   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:20.670897   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:20.670954   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:21.170713   51953 type.go:168] "Request Body" body=""
	I1210 05:49:21.170783   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:21.171082   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:21.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:49:21.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:21.670172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:22.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:49:22.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:22.170106   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:22.669822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:22.669894   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:22.670229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:23.169802   51953 type.go:168] "Request Body" body=""
	I1210 05:49:23.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:23.170200   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:23.170257   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:23.669758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:23.669829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:23.670132   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:24.169822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:24.169902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:24.170262   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:24.670129   51953 type.go:168] "Request Body" body=""
	I1210 05:49:24.670207   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:24.670559   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:25.170449   51953 type.go:168] "Request Body" body=""
	I1210 05:49:25.170521   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:25.170831   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:25.170881   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:25.670585   51953 type.go:168] "Request Body" body=""
	I1210 05:49:25.670666   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:25.671038   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:26.170684   51953 type.go:168] "Request Body" body=""
	I1210 05:49:26.170760   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:26.171104   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:26.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:49:26.669864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:26.670150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.169852   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.169930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.170272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.669984   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.670061   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.670384   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:27.670440   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:28.169751   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.170155   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:28.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.669874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.670210   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.170062   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.170136   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.170491   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.670274   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.670550   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:29.670593   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:30.170374   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.170446   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.170838   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:30.552353   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:30.608474   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608517   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608604   51953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
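Both vantage points in this log report connection refused: the host-side readiness poll hitting 192.168.49.2:8441 and kubectl's in-node OpenAPI download hitting [::1]:8441. A plain TCP probe is enough to confirm the apiserver process is not listening at all, as opposed to being up but unhealthy. A small diagnostic sketch (not part of minikube) checking both addresses from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a bare TCP connect and reports the result. Connection
// refused here means nothing is listening on the port.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: reachable\n", addr)
}

func main() {
	probe("192.168.49.2:8441") // host -> node, used by the readiness poll
	probe("127.0.0.1:8441")    // what kubectl's openapi download hits in-node
}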
	I1210 05:49:30.670690   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.670767   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.671090   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.169783   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.169860   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.170226   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.669889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.670241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:32.169940   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.170013   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.170338   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:32.170396   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:32.670045   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.670119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.670396   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.169834   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.170309   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.670201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.169903   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.169973   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.170269   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.670193   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.670266   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.670601   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:34.670655   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:35.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.170214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:35.669756   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.169901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.170223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.669946   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.670020   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.670367   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:37.170034   51953 type.go:168] "Request Body" body=""
	I1210 05:49:37.170108   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:37.170407   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:37.170461   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:37.669822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:37.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:37.670249   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:38.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:49:38.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:38.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:38.669935   51953 type.go:168] "Request Body" body=""
	I1210 05:49:38.670003   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:38.670313   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:39.170298   51953 type.go:168] "Request Body" body=""
	I1210 05:49:39.170373   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:39.170717   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:39.170771   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:39.670468   51953 type.go:168] "Request Body" body=""
	I1210 05:49:39.670545   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:39.670883   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:40.170669   51953 type.go:168] "Request Body" body=""
	I1210 05:49:40.170737   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:40.171069   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:40.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:49:40.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:40.670211   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:41.169813   51953 type.go:168] "Request Body" body=""
	I1210 05:49:41.169884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:41.170198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:41.669764   51953 type.go:168] "Request Body" body=""
	I1210 05:49:41.669859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:41.670152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:41.670193   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:42.169845   51953 type.go:168] "Request Body" body=""
	I1210 05:49:42.169948   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:42.170319   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:42.669816   51953 type.go:168] "Request Body" body=""
	I1210 05:49:42.669885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:42.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:43.169758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:43.169831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:43.170096   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:43.669848   51953 type.go:168] "Request Body" body=""
	I1210 05:49:43.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:43.670267   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:43.670317   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:44.169816   51953 type.go:168] "Request Body" body=""
	I1210 05:49:44.169893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:44.170187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:44.670057   51953 type.go:168] "Request Body" body=""
	I1210 05:49:44.670140   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:44.670470   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:44.970959   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:45.060109   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064226   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064337   51953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:49:45.067552   51953 out.go:179] * Enabled addons: 
	I1210 05:49:45.070225   51953 addons.go:530] duration metric: took 1m45.971891823s for enable addons: enabled=[]
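The summary above shows the shape of the addon orchestration: each addon is enabled via a callback, failures are surfaced as warnings rather than aborting startup, and a duration metric plus the list of successes (empty here, enabled=[]) is logged at the end. A minimal sketch of that shape; names and structure are illustrative assumptions, the real logic lives in minikube's addons.go:

package main

import (
	"fmt"
	"time"
)

// enableAddons runs each addon callback, reports failures without stopping,
// and logs a duration metric with whatever succeeded. Illustrative only.
func enableAddons(callbacks map[string]func() error) []string {
	start := time.Now()
	var enabled []string
	for name, fn := range callbacks {
		if err := fn(); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
			continue
		}
		enabled = append(enabled, name)
	}
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
	return enabled
}

func main() {
	refused := fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	enableAddons(map[string]func() error{
		"storage-provisioner":  func() error { return refused },
		"default-storageclass": func() error { return refused },
	})
}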
	I1210 05:49:45.169999   51953 type.go:168] "Request Body" body=""
	I1210 05:49:45.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:45.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:45.669844   51953 type.go:168] "Request Body" body=""
	I1210 05:49:45.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:45.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:46.169931   51953 type.go:168] "Request Body" body=""
	I1210 05:49:46.170025   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:46.170316   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:46.170369   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:46.670055   51953 type.go:168] "Request Body" body=""
	I1210 05:49:46.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:46.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:47.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:49:47.169900   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:47.170277   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:47.669758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:47.669844   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:47.670170   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:48.169861   51953 type.go:168] "Request Body" body=""
	I1210 05:49:48.169933   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:48.170293   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:48.669823   51953 type.go:168] "Request Body" body=""
	I1210 05:49:48.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:48.670185   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:48.670239   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:49.170189   51953 type.go:168] "Request Body" body=""
	I1210 05:49:49.170282   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:49.170581   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:49.670519   51953 type.go:168] "Request Body" body=""
	I1210 05:49:49.670591   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:49.670933   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:50.170751   51953 type.go:168] "Request Body" body=""
	I1210 05:49:50.170838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:50.171163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:50.669768   51953 type.go:168] "Request Body" body=""
	I1210 05:49:50.669842   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:50.670163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:51.169874   51953 type.go:168] "Request Body" body=""
	I1210 05:49:51.169945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:51.170294   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:51.170350   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:51.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:49:51.669925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:51.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:52.169785   51953 type.go:168] "Request Body" body=""
	I1210 05:49:52.169868   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:52.170166   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:52.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:49:52.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:52.670278   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:53.170002   51953 type.go:168] "Request Body" body=""
	I1210 05:49:53.170083   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:53.170428   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:53.170482   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:53.670134   51953 type.go:168] "Request Body" body=""
	I1210 05:49:53.670209   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:53.670537   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:54.170330   51953 type.go:168] "Request Body" body=""
	I1210 05:49:54.170403   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:54.170997   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:54.669762   51953 type.go:168] "Request Body" body=""
	I1210 05:49:54.669840   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:54.670157   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:55.170437   51953 type.go:168] "Request Body" body=""
	I1210 05:49:55.170508   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:55.170825   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:55.170879   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:55.670656   51953 type.go:168] "Request Body" body=""
	I1210 05:49:55.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:55.671067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:56.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:49:56.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:56.170163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:56.670631   51953 type.go:168] "Request Body" body=""
	I1210 05:49:56.670708   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:56.671047   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:57.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:49:57.169826   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:57.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:57.669853   51953 type.go:168] "Request Body" body=""
	I1210 05:49:57.669923   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:57.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:57.670309   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:58.169747   51953 type.go:168] "Request Body" body=""
	I1210 05:49:58.169817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:58.170122   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:58.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:49:58.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:58.670275   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:59.170063   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.170156   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.170502   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:59.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.670792   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.671123   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:59.671171   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:00.169945   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.170054   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.170391   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:00.670293   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.670372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.670734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.170379   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.170445   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.170785   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.670657   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.671101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:02.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.169916   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:02.170292   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:02.670641   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.670714   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.671049   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.169821   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.170173   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.670257   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.169808   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.169878   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.170170   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.670153   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.670227   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.670558   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:04.670612   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:05.170389   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.170463   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.170790   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:05.670350   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.670419   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.670674   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.170479   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.170562   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.170930   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.670726   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.670801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.671141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:06.671199   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:07.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.170225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:07.669823   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.669897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.670237   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.169822   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.169902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.669915   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.669997   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.670259   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:09.170295   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.170366   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.170686   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:09.170740   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:09.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.670275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.670611   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.170386   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.170464   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.170732   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.670493   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.670908   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:11.170688   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.170762   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.171109   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:11.171166   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:11.669753   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.670111   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:12.169788   51953 type.go:168] "Request Body" body=""
	I1210 05:50:12.169865   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:12.170193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:12.669828   51953 type.go:168] "Request Body" body=""
	I1210 05:50:12.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:12.670272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:13.169792   51953 type.go:168] "Request Body" body=""
	I1210 05:50:13.169860   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:13.170133   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:13.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:50:13.669879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:13.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:13.670257   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:14.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:50:14.169893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:14.170243   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:14.670026   51953 type.go:168] "Request Body" body=""
	I1210 05:50:14.670100   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:14.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:15.170050   51953 type.go:168] "Request Body" body=""
	I1210 05:50:15.170123   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:15.170471   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:15.670177   51953 type.go:168] "Request Body" body=""
	I1210 05:50:15.670250   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:15.670584   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:15.670636   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:16.170320   51953 type.go:168] "Request Body" body=""
	I1210 05:50:16.170389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:16.170687   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:16.670498   51953 type.go:168] "Request Body" body=""
	I1210 05:50:16.670574   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:16.670936   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:17.170736   51953 type.go:168] "Request Body" body=""
	I1210 05:50:17.170817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:17.171164   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:17.670572   51953 type.go:168] "Request Body" body=""
	I1210 05:50:17.670637   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:17.670949   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:17.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:18.169725   51953 type.go:168] "Request Body" body=""
	I1210 05:50:18.169795   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:18.170134   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:18.669834   51953 type.go:168] "Request Body" body=""
	I1210 05:50:18.669953   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:18.670308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:19.170037   51953 type.go:168] "Request Body" body=""
	I1210 05:50:19.170104   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:19.170365   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:19.670201   51953 type.go:168] "Request Body" body=""
	I1210 05:50:19.670277   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:19.670610   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:20.170409   51953 type.go:168] "Request Body" body=""
	I1210 05:50:20.170484   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:20.170822   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:20.170877   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:20.670595   51953 type.go:168] "Request Body" body=""
	I1210 05:50:20.670666   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:20.670993   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:21.169731   51953 type.go:168] "Request Body" body=""
	I1210 05:50:21.169813   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:21.170125   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:21.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:21.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:21.670193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:22.170669   51953 type.go:168] "Request Body" body=""
	I1210 05:50:22.170747   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:22.171033   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:22.171080   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:22.669728   51953 type.go:168] "Request Body" body=""
	I1210 05:50:22.669806   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:22.670127   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:23.169851   51953 type.go:168] "Request Body" body=""
	I1210 05:50:23.169933   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:23.170302   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:23.669972   51953 type.go:168] "Request Body" body=""
	I1210 05:50:23.670044   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:23.670358   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:24.170057   51953 type.go:168] "Request Body" body=""
	I1210 05:50:24.170129   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:24.170453   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:24.670197   51953 type.go:168] "Request Body" body=""
	I1210 05:50:24.670272   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:24.670612   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:24.670670   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:25.170337   51953 type.go:168] "Request Body" body=""
	I1210 05:50:25.170410   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:25.170739   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:25.670502   51953 type.go:168] "Request Body" body=""
	I1210 05:50:25.670572   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:25.670902   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:26.170691   51953 type.go:168] "Request Body" body=""
	I1210 05:50:26.170764   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:26.171108   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:26.669788   51953 type.go:168] "Request Body" body=""
	I1210 05:50:26.669867   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:26.670130   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:27.169817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:27.169897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:27.170246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:27.170300   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:27.669967   51953 type.go:168] "Request Body" body=""
	I1210 05:50:27.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:27.670392   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:28.169739   51953 type.go:168] "Request Body" body=""
	I1210 05:50:28.169822   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:28.170150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:28.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:50:28.669930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:28.670221   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:29.170249   51953 type.go:168] "Request Body" body=""
	I1210 05:50:29.170325   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:29.170644   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:29.170699   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:29.670163   51953 type.go:168] "Request Body" body=""
	I1210 05:50:29.670232   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:29.670555   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:30.170356   51953 type.go:168] "Request Body" body=""
	I1210 05:50:30.170428   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:30.170751   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:30.670538   51953 type.go:168] "Request Body" body=""
	I1210 05:50:30.670611   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:30.670921   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:31.170427   51953 type.go:168] "Request Body" body=""
	I1210 05:50:31.170500   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:31.170752   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:31.170791   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:31.670569   51953 type.go:168] "Request Body" body=""
	I1210 05:50:31.670653   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:31.670969   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:32.169710   51953 type.go:168] "Request Body" body=""
	I1210 05:50:32.169785   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:32.170083   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:32.670743   51953 type.go:168] "Request Body" body=""
	I1210 05:50:32.670820   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:32.671121   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:33.169805   51953 type.go:168] "Request Body" body=""
	I1210 05:50:33.169903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:33.170255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:33.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:50:33.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:33.670229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:33.670285   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:34.169778   51953 type.go:168] "Request Body" body=""
	I1210 05:50:34.169859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:34.170189   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:34.670104   51953 type.go:168] "Request Body" body=""
	I1210 05:50:34.670184   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:34.670511   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:35.169809   51953 type.go:168] "Request Body" body=""
	I1210 05:50:35.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:35.170193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:35.670544   51953 type.go:168] "Request Body" body=""
	I1210 05:50:35.670613   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:35.670878   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:35.670919   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:36.170712   51953 type.go:168] "Request Body" body=""
	I1210 05:50:36.170793   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:36.171084   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:36.669793   51953 type.go:168] "Request Body" body=""
	I1210 05:50:36.669873   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:36.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:37.169942   51953 type.go:168] "Request Body" body=""
	I1210 05:50:37.170016   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:37.170292   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:37.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:50:37.669902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:37.670220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:38.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:50:38.169910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:38.170283   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:38.170338   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:38.669845   51953 type.go:168] "Request Body" body=""
	I1210 05:50:38.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:38.670182   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:39.170144   51953 type.go:168] "Request Body" body=""
	I1210 05:50:39.170220   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:39.170549   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:39.670142   51953 type.go:168] "Request Body" body=""
	I1210 05:50:39.670218   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:39.670527   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:40.170193   51953 type.go:168] "Request Body" body=""
	I1210 05:50:40.170274   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:40.170539   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:40.170603   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:40.670363   51953 type.go:168] "Request Body" body=""
	I1210 05:50:40.670438   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:40.670794   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:41.170587   51953 type.go:168] "Request Body" body=""
	I1210 05:50:41.170671   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:41.171005   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:41.669712   51953 type.go:168] "Request Body" body=""
	I1210 05:50:41.669800   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:41.670128   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:42.169860   51953 type.go:168] "Request Body" body=""
	I1210 05:50:42.169951   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:42.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:42.669840   51953 type.go:168] "Request Body" body=""
	I1210 05:50:42.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:42.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:42.670293   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:43.169767   51953 type.go:168] "Request Body" body=""
	I1210 05:50:43.169833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:43.170101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:43.669850   51953 type.go:168] "Request Body" body=""
	I1210 05:50:43.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:43.670246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:44.169977   51953 type.go:168] "Request Body" body=""
	I1210 05:50:44.170071   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:44.170414   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:44.670140   51953 type.go:168] "Request Body" body=""
	I1210 05:50:44.670226   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:44.670613   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:44.670677   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:45.170475   51953 type.go:168] "Request Body" body=""
	I1210 05:50:45.170563   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:45.170891   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:45.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:50:45.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:45.670222   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:46.169767   51953 type.go:168] "Request Body" body=""
	I1210 05:50:46.169838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:46.170104   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:46.669827   51953 type.go:168] "Request Body" body=""
	I1210 05:50:46.669903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:46.670226   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:47.169875   51953 type.go:168] "Request Body" body=""
	I1210 05:50:47.169958   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:47.170385   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:47.170442   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:47.670688   51953 type.go:168] "Request Body" body=""
	I1210 05:50:47.670757   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:47.671081   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:48.169796   51953 type.go:168] "Request Body" body=""
	I1210 05:50:48.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:48.170206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:48.669926   51953 type.go:168] "Request Body" body=""
	I1210 05:50:48.670000   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:48.670320   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:49.170308   51953 type.go:168] "Request Body" body=""
	I1210 05:50:49.170376   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:49.170645   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:49.170686   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:49.670650   51953 type.go:168] "Request Body" body=""
	I1210 05:50:49.670726   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:49.671070   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET /api/v1/nodes/functional-644034 cycle repeats every ~500 ms from 05:50:50 through 05:51:50 (roughly 120 further attempts, each logging an empty "Response" status), and the "will retry ... connection refused" warning recurs about every two seconds; the log resumes at the final attempts below ...]
	I1210 05:51:51.170729   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.171139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:51.171187   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:51.669733   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.669807   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.670146   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.169929   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.170284   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.669819   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.669893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.670207   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.169992   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.669991   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.670070   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.670340   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:53.670380   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:54.170031   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.170110   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.170441   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:54.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.670529   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.169832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.170177   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.669779   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:56.169901   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.169974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:56.170373   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:56.669742   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.669816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.670103   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.169781   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.170181   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.669889   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.669965   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:58.170689   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.170758   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.171080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:58.171123   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:58.669796   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.170073   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.170489   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.670565   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.170445   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.170546   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.170880   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.669804   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.670208   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:00.670259   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:01.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.169859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:01.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.670097   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.670276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:02.670355   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:03.170035   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.170108   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.170401   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.669890   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.670202   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.169758   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.169836   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.170117   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.670079   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.670164   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.670516   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:04.670563   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:05.169839   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.169935   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.170260   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:05.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.669823   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.670097   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.170195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.670297   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:07.169768   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.169841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.170149   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:07.170196   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:07.669845   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.669915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.169973   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.170047   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.170399   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.670082   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.670165   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:09.170372   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.170444   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.170740   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:09.170790   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:09.670553   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.670631   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.670948   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.170667   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.170738   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.170996   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.669729   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.669805   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.670126   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.669912   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.669979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:11.670280   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:12.169939   51953 type.go:168] "Request Body" body=""
	I1210 05:52:12.170014   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:12.170362   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:12.670078   51953 type.go:168] "Request Body" body=""
	I1210 05:52:12.670162   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:12.670514   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:13.169756   51953 type.go:168] "Request Body" body=""
	I1210 05:52:13.169834   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:13.170093   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:13.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:52:13.669896   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:13.670227   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:14.169820   51953 type.go:168] "Request Body" body=""
	I1210 05:52:14.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:14.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:14.170294   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:14.670030   51953 type.go:168] "Request Body" body=""
	I1210 05:52:14.670095   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:14.670375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:15.170120   51953 type.go:168] "Request Body" body=""
	I1210 05:52:15.170196   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:15.170539   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:15.670302   51953 type.go:168] "Request Body" body=""
	I1210 05:52:15.670373   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:15.670676   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:16.170432   51953 type.go:168] "Request Body" body=""
	I1210 05:52:16.170507   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:16.170803   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:16.170857   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:16.670503   51953 type.go:168] "Request Body" body=""
	I1210 05:52:16.670576   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:16.670887   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:17.170709   51953 type.go:168] "Request Body" body=""
	I1210 05:52:17.170781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:17.171089   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:17.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:52:17.669817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:17.670129   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:18.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:52:18.169909   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:18.170246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:18.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:52:18.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:18.670224   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:18.670276   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:19.170163   51953 type.go:168] "Request Body" body=""
	I1210 05:52:19.170242   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:19.170554   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:19.670487   51953 type.go:168] "Request Body" body=""
	I1210 05:52:19.670569   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:19.670973   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:20.169737   51953 type.go:168] "Request Body" body=""
	I1210 05:52:20.169824   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:20.170206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:20.669861   51953 type.go:168] "Request Body" body=""
	I1210 05:52:20.669938   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:20.670209   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:21.169833   51953 type.go:168] "Request Body" body=""
	I1210 05:52:21.169904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:21.170238   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:21.170290   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:21.669832   51953 type.go:168] "Request Body" body=""
	I1210 05:52:21.669911   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:21.670234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:22.169913   51953 type.go:168] "Request Body" body=""
	I1210 05:52:22.169983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:22.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:22.669812   51953 type.go:168] "Request Body" body=""
	I1210 05:52:22.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:22.670179   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:23.169847   51953 type.go:168] "Request Body" body=""
	I1210 05:52:23.169930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:23.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:23.669962   51953 type.go:168] "Request Body" body=""
	I1210 05:52:23.670037   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:23.670326   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:23.670367   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:24.170037   51953 type.go:168] "Request Body" body=""
	I1210 05:52:24.170109   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:24.170439   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:24.670168   51953 type.go:168] "Request Body" body=""
	I1210 05:52:24.670241   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:24.670573   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:25.170350   51953 type.go:168] "Request Body" body=""
	I1210 05:52:25.170421   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:25.170687   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:25.670431   51953 type.go:168] "Request Body" body=""
	I1210 05:52:25.670504   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:25.670821   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:25.670873   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:26.170481   51953 type.go:168] "Request Body" body=""
	I1210 05:52:26.170555   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:26.170912   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:26.670658   51953 type.go:168] "Request Body" body=""
	I1210 05:52:26.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:26.670998   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:27.169719   51953 type.go:168] "Request Body" body=""
	I1210 05:52:27.169797   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:27.170122   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:27.669792   51953 type.go:168] "Request Body" body=""
	I1210 05:52:27.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:27.670184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:28.169778   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.169852   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.170172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:28.170229   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:28.669766   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.669838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.170045   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.170125   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.170415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.670193   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.670453   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:30.170123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.170199   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.170559   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:30.170635   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:30.670127   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.670200   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.670509   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.169839   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.170095   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.669875   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.670200   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.170198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.669769   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.670162   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:32.670212   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:33.169842   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.169915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:33.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.169925   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.170009   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.170331   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.670116   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.670194   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:34.670571   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:35.170367   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.170782   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:35.670577   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.670647   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.670912   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.170722   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.171183   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.669843   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:37.170702   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.170771   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.171105   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:37.171165   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:37.669824   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.670242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.169909   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.170276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.669712   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.669779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.670087   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.169961   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.170037   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.170366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.670236   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.670306   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.670633   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:39.670687   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:40.170413   51953 type.go:168] "Request Body" body=""
	I1210 05:52:40.170482   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:40.170769   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:40.670553   51953 type.go:168] "Request Body" body=""
	I1210 05:52:40.670623   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:40.670995   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:41.169710   51953 type.go:168] "Request Body" body=""
	I1210 05:52:41.169781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:41.170119   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:41.669770   51953 type.go:168] "Request Body" body=""
	I1210 05:52:41.669843   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:41.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:42.169882   51953 type.go:168] "Request Body" body=""
	I1210 05:52:42.169973   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:42.170386   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:42.170462   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-644034 poll repeats every ~500ms from 05:52:42 through 05:53:41; every attempt fails identically, and node_ready.go logs a "connection refused" warning about every 2.5s ...]
	W1210 05:53:41.170412   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:41.670056   51953 type.go:168] "Request Body" body=""
	I1210 05:53:41.670122   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:41.670440   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.169947   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.170336   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.670088   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.670163   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.670484   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:43.170162   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.170230   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.170547   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:43.170609   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:43.670381   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.670459   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.670797   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.170478   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.170553   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.170917   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.670710   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.670781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.671096   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.169927   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.170248   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.670167   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.670243   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.670596   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:45.670654   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:46.170356   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.170470   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.170775   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:46.670631   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.670706   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.671056   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.169777   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.169864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.170180   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.670484   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.670850   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:47.670896   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:48.170703   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.170773   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.171186   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:48.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.670270   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.170239   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.170314   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.170632   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.670158   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.670250   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.670638   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:50.170456   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.170536   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.170897   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:50.170949   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:50.670681   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.670750   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.671080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.169790   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.170201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.669911   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.669983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.670289   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.169885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.170158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:52.670299   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:53.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.170220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:53.669788   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.669864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.670142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.169882   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.169960   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.170294   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.670138   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.670217   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.670514   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:54.670556   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:55.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.670235   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.169933   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.170005   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.170326   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.669987   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.670052   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.670317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:57.170000   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.170105   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.170463   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:57.170520   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:57.670190   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.670263   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.670595   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.170369   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.170773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.670583   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.670669   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.671047   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:59.170051   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.170137   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.170479   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:59.170549   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:59.669763   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.669831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:54:00.169895   51953 type.go:168] "Request Body" body=""
	I1210 05:54:00.170210   51953 node_ready.go:38] duration metric: took 6m0.000621671s for node "functional-644034" to be "Ready" ...
	I1210 05:54:00.173449   51953 out.go:203] 
	W1210 05:54:00.176680   51953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 05:54:00.176713   51953 out.go:285] * 
	W1210 05:54:00.178858   51953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:54:00.215003   51953 out.go:203] 
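
The failure above is a client-side readiness poll timing out: for six minutes minikube issued the same GET against /api/v1/nodes/functional-644034 every ~500ms and never saw a Ready condition, because nothing was listening on 8441. A minimal client-go sketch of this kind of poll follows; it is illustrative only, not minikube's node_ready.go. The kubeconfig path and node name are the ones from this run, and the 500ms/6m cadence matches the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this run's environment (KUBECONFIG above).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22094-2307/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes -- the cadence and deadline seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-644034", metav1.GetOptions{})
			if err != nil {
				// "connection refused" lands here; returning nil keeps the poll retrying.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	// nil on Ready; "context deadline exceeded" after 6m, as in this run.
	fmt.Println("wait result:", err)
}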
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506429728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506448067Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506489134Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506504420Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506514545Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506527788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506537643Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506548720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506564646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506593873Z" level=info msg="Connect containerd service"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506912364Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.507519251Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.527118026Z" level=info msg="Start subscribing containerd event"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.527202514Z" level=info msg="Start recovering state"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.530801717Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.530884449Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549808867Z" level=info msg="Start event monitor"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549866450Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549876329Z" level=info msg="Start streaming server"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549885511Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549893150Z" level=info msg="runtime interface starting up..."
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549899164Z" level=info msg="starting plugins..."
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549910865Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 05:47:57 functional-644034 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.551142386Z" level=info msg="containerd successfully booted in 0.065614s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:54:02.425638    9029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:02.426141    9029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:02.428128    9029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:02.428958    9029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:02.430885    9029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
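
The describe-nodes failure is the same symptom seen from the test host: nothing accepts TCP connections on port 8441, whether addressed as 192.168.49.2 (the node's IP) or as localhost from inside the node. A quick probe sketch; the addresses come from the log, the 2-second timeout is an arbitrary choice.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Both endpoints the log shows being refused: the node IP as seen by the
	// tests, and localhost as seen from inside the node.
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // expect "connect: connection refused" here
			continue
		}
		conn.Close()
		fmt.Printf("%s: accepting connections\n", addr)
	}
}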
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 05:54:02 up 36 min,  0 user,  load average: 0.28, 0.36, 0.58
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 05:53:58 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:53:59 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 10 05:53:59 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:53:59 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:53:59 functional-644034 kubelet[8912]: E1210 05:53:59.707812    8912 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:53:59 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:53:59 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:00 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 10 05:54:00 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:00 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:00 functional-644034 kubelet[8918]: E1210 05:54:00.474883    8918 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:00 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:00 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 10 05:54:01 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:01 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:01 functional-644034 kubelet[8931]: E1210 05:54:01.240942    8931 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 10 05:54:01 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:01 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:01 functional-644034 kubelet[8945]: E1210 05:54:01.979120    8945 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
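
The kubelet journal above contains the actual root cause: every restart (counters 808 through 811) dies validating its configuration with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes back and every poll above is refused. Whether a host is on the cgroup v2 unified hierarchy can be read from the filesystem type of /sys/fs/cgroup; a sketch using golang.org/x/sys/unix (the program is illustrative, though the statfs magic check itself is the standard detection).

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	// On a cgroup v2 (unified) host, /sys/fs/cgroup is a cgroup2 filesystem.
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2: this kubelet configuration would start")
	} else {
		fmt.Println("cgroup v1: matches the validation failure in the journal above")
	}
}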
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (411.350355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.98s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-644034 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-644034 get po -A: exit status 1 (59.4276ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-644034 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-644034 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-644034 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
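
In the inspect output, every exposed port is published on 127.0.0.1 with a Docker-assigned ephemeral HostPort; the apiserver port 8441/tcp lands on 32791. A sketch that reads the same mapping through the Docker Go SDK (the client options are conventional assumptions; the container name is the one inspected above).

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	insp, err := cli.ContainerInspect(context.Background(), "functional-644034")
	if err != nil {
		panic(err)
	}
	// For this run: 22->32788, 2376->32789, 5000->32790, 8441->32791, 32443->32792.
	for port, bindings := range insp.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}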
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (314.657853ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh sudo cat /etc/test/nested/copy/4116/hosts                                                                                                 │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image save kicbase/echo-server:functional-944360 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ update-context │ functional-944360 update-context --alsologtostderr -v=2                                                                                                         │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image rm kicbase/echo-server:functional-944360 --alsologtostderr                                                                              │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image save --daemon kicbase/echo-server:functional-944360 --alsologtostderr                                                                   │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format short --alsologtostderr                                                                                                     │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format yaml --alsologtostderr                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh            │ functional-944360 ssh pgrep buildkitd                                                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ image          │ functional-944360 image ls --format json --alsologtostderr                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls --format table --alsologtostderr                                                                                                     │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr                                                          │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image          │ functional-944360 image ls                                                                                                                                      │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ delete         │ -p functional-944360                                                                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ start          │ -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ start          │ -p functional-644034 --alsologtostderr -v=8                                                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:47:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
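
The header above describes klog's line layout ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). A sketch regexp that splits a line from this log into those fields; the pattern is an assumption derived from the stated format, not klog's own parser.

package main

import (
	"fmt"
	"regexp"
)

// One capture group per field named in the "Log line format" header.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := `W1210 05:54:00.176680   51953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded`
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\nmsg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}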
	I1210 05:47:54.556574   51953 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:47:54.556774   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.556804   51953 out.go:374] Setting ErrFile to fd 2...
	I1210 05:47:54.556824   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.557680   51953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:47:54.558123   51953 out.go:368] Setting JSON to false
	I1210 05:47:54.558985   51953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1825,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:47:54.559094   51953 start.go:143] virtualization:  
	I1210 05:47:54.562634   51953 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:47:54.566518   51953 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:47:54.566592   51953 notify.go:221] Checking for updates...
	I1210 05:47:54.572379   51953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:47:54.575335   51953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:54.578363   51953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:47:54.581210   51953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:47:54.584186   51953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:47:54.587618   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:54.587759   51953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:47:54.618368   51953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:47:54.618493   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.683662   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.67215006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.683767   51953 docker.go:319] overlay module found
	I1210 05:47:54.686996   51953 out.go:179] * Using the docker driver based on existing profile
	I1210 05:47:54.689865   51953 start.go:309] selected driver: docker
	I1210 05:47:54.689883   51953 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.689998   51953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:47:54.690096   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.769093   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.760185758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.769542   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:54.769597   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:54.769652   51953 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.772754   51953 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:47:54.775504   51953 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:47:54.778330   51953 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:47:54.781109   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:54.781186   51953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:47:54.800171   51953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:47:54.800192   51953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:47:54.839003   51953 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:47:55.003206   51953 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
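	Note: the two 404s above mean no preload tarball has been published for v1.35.0-rc.1 on either mirror, so minikube falls back to caching each required image individually (the cache.go lines further down). A hand check of the same URL (illustrative only, not part of the test run) would look like:
	
	    # prints 404 for as long as no preload tarball exists for this RC
	    curl -s -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4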
	I1210 05:47:55.003455   51953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:47:55.003769   51953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:47:55.003826   51953 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.003903   51953 start.go:364] duration metric: took 49.001µs to acquireMachinesLock for "functional-644034"
	I1210 05:47:55.003933   51953 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:47:55.003940   51953 fix.go:54] fixHost starting: 
	I1210 05:47:55.004094   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
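	Note: "Not caching binary" means the v1.35.0-rc.1 kubeadm binary is fetched straight from dl.k8s.io; the ?checksum=file:... query points the downloader at the published .sha256 sidecar for verification. Fetching that sidecar by hand (illustrative only):
	
	    # the .sha256 file carries the expected digest of the kubeadm binary
	    curl -sL https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256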
	I1210 05:47:55.004258   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:55.028659   51953 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:47:55.028694   51953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:47:55.031932   51953 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:47:55.031977   51953 machine.go:94] provisionDockerMachine start ...
	I1210 05:47:55.032062   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.055133   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.055465   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.055479   51953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:47:55.170848   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.207999   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.208023   51953 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:47:55.208102   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.228767   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.229073   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.229085   51953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:47:55.357858   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.390746   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.390831   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.434495   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.434811   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.434828   51953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
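	Note: the script above is idempotent: it rewrites or appends the 127.0.1.1 entry only when /etc/hosts does not already map the machine's hostname. Confirming the result inside the node (illustrative only):
	
	    # expect a 127.0.1.1 line naming the container
	    grep functional-644034 /etc/hosts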
	I1210 05:47:55.523319   51953 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523359   51953 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523419   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:47:55.523430   51953 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.759µs
	I1210 05:47:55.523435   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:47:55.523445   51953 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 87.246µs
	I1210 05:47:55.523453   51953 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523438   51953 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:47:55.523449   51953 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523467   51953 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523481   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:47:55.523488   51953 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.262µs
	I1210 05:47:55.523494   51953 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:47:55.523503   51953 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523523   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:47:55.523531   51953 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 65.428µs
	I1210 05:47:55.523538   51953 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523542   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:47:55.523548   51953 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.473µs
	I1210 05:47:55.523554   51953 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:47:55.523548   51953 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523565   51953 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523317   51953 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523599   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:47:55.523607   51953 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.7µs
	I1210 05:47:55.523610   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:47:55.523613   51953 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:47:55.523600   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:47:55.523617   51953 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 70.203µs
	I1210 05:47:55.523622   51953 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 325.49µs
	I1210 05:47:55.523626   51953 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523628   51953 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523644   51953 cache.go:87] Successfully saved all images to host disk.
	I1210 05:47:55.587205   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:47:55.587232   51953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:47:55.587288   51953 ubuntu.go:190] setting up certificates
	I1210 05:47:55.587298   51953 provision.go:84] configureAuth start
	I1210 05:47:55.587369   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:55.604738   51953 provision.go:143] copyHostCerts
	I1210 05:47:55.604778   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604816   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:47:55.604828   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604905   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:47:55.605000   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605022   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:47:55.605029   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605061   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:47:55.605114   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605134   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:47:55.605139   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605169   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:47:55.605233   51953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
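	Note: the server certificate is generated with SANs covering the loopback address, the container IP, and the host names listed in san=[...] above, so TLS succeeds no matter which of them a client dials. Inspecting the SANs on the generated cert (illustrative; assumes openssl is available on the host):
	
	    openssl x509 -in /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'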
	I1210 05:47:55.781276   51953 provision.go:177] copyRemoteCerts
	I1210 05:47:55.781365   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:47:55.781432   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.797956   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:55.902711   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 05:47:55.902771   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:47:55.919779   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 05:47:55.919840   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:47:55.936935   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 05:47:55.936994   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:47:55.953689   51953 provision.go:87] duration metric: took 366.363656ms to configureAuth
	I1210 05:47:55.953721   51953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:47:55.953915   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:55.953927   51953 machine.go:97] duration metric: took 921.944178ms to provisionDockerMachine
	I1210 05:47:55.953936   51953 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:47:55.953952   51953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:47:55.954004   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:47:55.954054   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.971130   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.075277   51953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:47:56.078673   51953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:47:56.078694   51953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:47:56.078699   51953 command_runner.go:130] > VERSION_ID="12"
	I1210 05:47:56.078704   51953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:47:56.078708   51953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:47:56.078712   51953 command_runner.go:130] > ID=debian
	I1210 05:47:56.078717   51953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:47:56.078725   51953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:47:56.078732   51953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:47:56.078800   51953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:47:56.078828   51953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:47:56.078840   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:47:56.078899   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:47:56.078986   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:47:56.078998   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1210 05:47:56.079103   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:47:56.079112   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> /etc/test/nested/copy/4116/hosts
	I1210 05:47:56.079156   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:47:56.086554   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:56.104005   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:47:56.121596   51953 start.go:296] duration metric: took 167.644644ms for postStartSetup
	I1210 05:47:56.121686   51953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:47:56.121728   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.138924   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.243468   51953 command_runner.go:130] > 14%
	I1210 05:47:56.243960   51953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:47:56.248281   51953 command_runner.go:130] > 169G
	I1210 05:47:56.248748   51953 fix.go:56] duration metric: took 1.244804723s for fixHost
	I1210 05:47:56.248771   51953 start.go:83] releasing machines lock for "functional-644034", held for 1.24485909s
	I1210 05:47:56.248837   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:56.266070   51953 ssh_runner.go:195] Run: cat /version.json
	I1210 05:47:56.266123   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.266146   51953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:47:56.266199   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.283872   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.284272   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.472387   51953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 05:47:56.475023   51953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:47:56.475222   51953 ssh_runner.go:195] Run: systemctl --version
	I1210 05:47:56.481051   51953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:47:56.481144   51953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:47:56.481557   51953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:47:56.485740   51953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:47:56.485802   51953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:47:56.485889   51953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:47:56.493391   51953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
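	Note: because kindnet was selected (cni.go:143 above), any stray bridge or podman CNI configs would conflict, so the find/mv above renames them to *.mk_disabled; on this run there were none. Listing what remains active (illustrative only):
	
	    # only the loopback and, once deployed, the kindnet config should be present
	    ls /etc/cni/net.d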
	I1210 05:47:56.493413   51953 start.go:496] detecting cgroup driver to use...
	I1210 05:47:56.493443   51953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:47:56.493499   51953 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:47:56.508720   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:47:56.521711   51953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:47:56.521777   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:47:56.537527   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:47:56.551315   51953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:47:56.656595   51953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:47:56.765354   51953 docker.go:234] disabling docker service ...
	I1210 05:47:56.765422   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:47:56.780352   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:47:56.793570   51953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:47:56.900961   51953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:47:57.025824   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:47:57.039104   51953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:47:57.052658   51953 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:47:57.053978   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.213891   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:47:57.223164   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:47:57.232001   51953 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:47:57.232070   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:47:57.240776   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.249302   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:47:57.258094   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.266381   51953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:47:57.274230   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:47:57.282766   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:47:57.291675   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:47:57.300542   51953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:47:57.307150   51953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:47:57.308059   51953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:47:57.315237   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:57.433904   51953 ssh_runner.go:195] Run: sudo systemctl restart containerd
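	Note: the run of sed edits above rewrites /etc/containerd/config.toml to match the host's cgroupfs driver (SystemdCgroup = false), pins the sandbox image to registry.k8s.io/pause:3.10.1, and forces the runc v2 shim, after which containerd is restarted. A spot-check inside the node (illustrative only):
	
	    grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml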
	I1210 05:47:57.552794   51953 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:47:57.552901   51953 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:47:57.556769   51953 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 05:47:57.556839   51953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:47:57.556861   51953 command_runner.go:130] > Device: 0,73	Inode: 1614        Links: 1
	I1210 05:47:57.556893   51953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:57.556921   51953 command_runner.go:130] > Access: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556947   51953 command_runner.go:130] > Modify: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556968   51953 command_runner.go:130] > Change: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.557011   51953 command_runner.go:130] >  Birth: -
	I1210 05:47:57.557078   51953 start.go:564] Will wait 60s for crictl version
	I1210 05:47:57.557155   51953 ssh_runner.go:195] Run: which crictl
	I1210 05:47:57.560538   51953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:47:57.560706   51953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:47:57.582482   51953 command_runner.go:130] > Version:  0.1.0
	I1210 05:47:57.582585   51953 command_runner.go:130] > RuntimeName:  containerd
	I1210 05:47:57.582609   51953 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 05:47:57.582715   51953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:47:57.584523   51953 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:47:57.584650   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.601892   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.603507   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.622429   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.630007   51953 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:47:57.632949   51953 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
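	Note: the Go template in the docker network inspect call above flattens the network's name, driver, subnet, gateway, MTU, and attached container IPs into a single JSON object. A smaller template that pulls just the subnet and gateway (illustrative only):
	
	    docker network inspect functional-644034 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'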
	I1210 05:47:57.648626   51953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:47:57.652604   51953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 05:47:57.652711   51953 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:47:57.652889   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.820648   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.971830   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:58.124406   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:58.124495   51953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:47:58.146688   51953 command_runner.go:130] > {
	I1210 05:47:58.146710   51953 command_runner.go:130] >   "images":  [
	I1210 05:47:58.146724   51953 command_runner.go:130] >     {
	I1210 05:47:58.146735   51953 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 05:47:58.146741   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146747   51953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 05:47:58.146750   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146755   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146765   51953 command_runner.go:130] >       "size":  "8032639",
	I1210 05:47:58.146779   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146784   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146790   51953 command_runner.go:130] >     },
	I1210 05:47:58.146794   51953 command_runner.go:130] >     {
	I1210 05:47:58.146801   51953 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 05:47:58.146808   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146813   51953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 05:47:58.146817   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146821   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146830   51953 command_runner.go:130] >       "size":  "21166088",
	I1210 05:47:58.146837   51953 command_runner.go:130] >       "username":  "nonroot",
	I1210 05:47:58.146841   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146844   51953 command_runner.go:130] >     },
	I1210 05:47:58.146847   51953 command_runner.go:130] >     {
	I1210 05:47:58.146855   51953 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 05:47:58.146861   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146867   51953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 05:47:58.146873   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146878   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146885   51953 command_runner.go:130] >       "size":  "21748497",
	I1210 05:47:58.146888   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146897   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146904   51953 command_runner.go:130] >       },
	I1210 05:47:58.146908   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146912   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146917   51953 command_runner.go:130] >     },
	I1210 05:47:58.146925   51953 command_runner.go:130] >     {
	I1210 05:47:58.146933   51953 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 05:47:58.146939   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146948   51953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 05:47:58.146955   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146959   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146964   51953 command_runner.go:130] >       "size":  "24690149",
	I1210 05:47:58.146967   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146972   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146975   51953 command_runner.go:130] >       },
	I1210 05:47:58.146979   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146985   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146990   51953 command_runner.go:130] >     },
	I1210 05:47:58.146996   51953 command_runner.go:130] >     {
	I1210 05:47:58.147003   51953 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 05:47:58.147007   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147030   51953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 05:47:58.147034   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147038   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147042   51953 command_runner.go:130] >       "size":  "20670083",
	I1210 05:47:58.147046   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147050   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147056   51953 command_runner.go:130] >       },
	I1210 05:47:58.147060   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147067   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147070   51953 command_runner.go:130] >     },
	I1210 05:47:58.147081   51953 command_runner.go:130] >     {
	I1210 05:47:58.147088   51953 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 05:47:58.147092   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147099   51953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 05:47:58.147103   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147107   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147111   51953 command_runner.go:130] >       "size":  "22430795",
	I1210 05:47:58.147122   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147127   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147132   51953 command_runner.go:130] >     },
	I1210 05:47:58.147135   51953 command_runner.go:130] >     {
	I1210 05:47:58.147144   51953 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 05:47:58.147150   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147155   51953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 05:47:58.147161   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147173   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147180   51953 command_runner.go:130] >       "size":  "15403461",
	I1210 05:47:58.147183   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147187   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147190   51953 command_runner.go:130] >       },
	I1210 05:47:58.147194   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147198   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147205   51953 command_runner.go:130] >     },
	I1210 05:47:58.147208   51953 command_runner.go:130] >     {
	I1210 05:47:58.147215   51953 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 05:47:58.147221   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147226   51953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 05:47:58.147232   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147236   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147248   51953 command_runner.go:130] >       "size":  "265458",
	I1210 05:47:58.147252   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147256   51953 command_runner.go:130] >         "value":  "65535"
	I1210 05:47:58.147259   51953 command_runner.go:130] >       },
	I1210 05:47:58.147270   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147274   51953 command_runner.go:130] >       "pinned":  true
	I1210 05:47:58.147277   51953 command_runner.go:130] >     }
	I1210 05:47:58.147282   51953 command_runner.go:130] >   ]
	I1210 05:47:58.147284   51953 command_runner.go:130] > }
	I1210 05:47:58.149521   51953 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:47:58.149540   51953 cache_images.go:86] Images are preloaded, skipping loading
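	Note: "all images are preloaded" is decided by comparing the repoTags in the crictl dump above against the expected image list for v1.35.0-rc.1. Pulling just the tags out of the same JSON (illustrative; assumes jq is installed in the node):
	
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'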
	I1210 05:47:58.149552   51953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:47:58.149645   51953 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
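	Note: the [Unit]/[Service]/[Install] fragment above is the kubelet systemd override minikube renders for this node: the empty ExecStart= clears the packaged command line before the v1.35.0-rc.1 kubelet is started with the node's hostname, IP, and kubeconfig. Viewing the unit together with all drop-ins as systemd resolves them (illustrative only):
	
	    # run inside the node, e.g. via minikube ssh
	    sudo systemctl cat kubelet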
	I1210 05:47:58.149706   51953 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:47:58.176587   51953 command_runner.go:130] > {
	I1210 05:47:58.176610   51953 command_runner.go:130] >   "cniconfig": {
	I1210 05:47:58.176616   51953 command_runner.go:130] >     "Networks": [
	I1210 05:47:58.176620   51953 command_runner.go:130] >       {
	I1210 05:47:58.176624   51953 command_runner.go:130] >         "Config": {
	I1210 05:47:58.176629   51953 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 05:47:58.176644   51953 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 05:47:58.176648   51953 command_runner.go:130] >           "Plugins": [
	I1210 05:47:58.176652   51953 command_runner.go:130] >             {
	I1210 05:47:58.176657   51953 command_runner.go:130] >               "Network": {
	I1210 05:47:58.176662   51953 command_runner.go:130] >                 "ipam": {},
	I1210 05:47:58.176673   51953 command_runner.go:130] >                 "type": "loopback"
	I1210 05:47:58.176678   51953 command_runner.go:130] >               },
	I1210 05:47:58.176687   51953 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 05:47:58.176691   51953 command_runner.go:130] >             }
	I1210 05:47:58.176694   51953 command_runner.go:130] >           ],
	I1210 05:47:58.176704   51953 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 05:47:58.176717   51953 command_runner.go:130] >         },
	I1210 05:47:58.176725   51953 command_runner.go:130] >         "IFName": "lo"
	I1210 05:47:58.176728   51953 command_runner.go:130] >       }
	I1210 05:47:58.176732   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176736   51953 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 05:47:58.176742   51953 command_runner.go:130] >     "PluginDirs": [
	I1210 05:47:58.176746   51953 command_runner.go:130] >       "/opt/cni/bin"
	I1210 05:47:58.176752   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176756   51953 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 05:47:58.176771   51953 command_runner.go:130] >     "Prefix": "eth"
	I1210 05:47:58.176775   51953 command_runner.go:130] >   },
	I1210 05:47:58.176782   51953 command_runner.go:130] >   "config": {
	I1210 05:47:58.176789   51953 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 05:47:58.176793   51953 command_runner.go:130] >       "/etc/cdi",
	I1210 05:47:58.176797   51953 command_runner.go:130] >       "/var/run/cdi"
	I1210 05:47:58.176803   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176807   51953 command_runner.go:130] >     "cni": {
	I1210 05:47:58.176813   51953 command_runner.go:130] >       "binDir": "",
	I1210 05:47:58.176817   51953 command_runner.go:130] >       "binDirs": [
	I1210 05:47:58.176821   51953 command_runner.go:130] >         "/opt/cni/bin"
	I1210 05:47:58.176825   51953 command_runner.go:130] >       ],
	I1210 05:47:58.176836   51953 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 05:47:58.176840   51953 command_runner.go:130] >       "confTemplate": "",
	I1210 05:47:58.176844   51953 command_runner.go:130] >       "ipPref": "",
	I1210 05:47:58.176850   51953 command_runner.go:130] >       "maxConfNum": 1,
	I1210 05:47:58.176854   51953 command_runner.go:130] >       "setupSerially": false,
	I1210 05:47:58.176861   51953 command_runner.go:130] >       "useInternalLoopback": false
	I1210 05:47:58.176864   51953 command_runner.go:130] >     },
	I1210 05:47:58.176874   51953 command_runner.go:130] >     "containerd": {
	I1210 05:47:58.176880   51953 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 05:47:58.176886   51953 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 05:47:58.176892   51953 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 05:47:58.176901   51953 command_runner.go:130] >       "runtimes": {
	I1210 05:47:58.176905   51953 command_runner.go:130] >         "runc": {
	I1210 05:47:58.176909   51953 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 05:47:58.176915   51953 command_runner.go:130] >           "PodAnnotations": null,
	I1210 05:47:58.176920   51953 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 05:47:58.176926   51953 command_runner.go:130] >           "cgroupWritable": false,
	I1210 05:47:58.176930   51953 command_runner.go:130] >           "cniConfDir": "",
	I1210 05:47:58.176934   51953 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 05:47:58.176939   51953 command_runner.go:130] >           "io_type": "",
	I1210 05:47:58.176943   51953 command_runner.go:130] >           "options": {
	I1210 05:47:58.176950   51953 command_runner.go:130] >             "BinaryName": "",
	I1210 05:47:58.176955   51953 command_runner.go:130] >             "CriuImagePath": "",
	I1210 05:47:58.176970   51953 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 05:47:58.176977   51953 command_runner.go:130] >             "IoGid": 0,
	I1210 05:47:58.176981   51953 command_runner.go:130] >             "IoUid": 0,
	I1210 05:47:58.176985   51953 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 05:47:58.176991   51953 command_runner.go:130] >             "Root": "",
	I1210 05:47:58.176995   51953 command_runner.go:130] >             "ShimCgroup": "",
	I1210 05:47:58.177002   51953 command_runner.go:130] >             "SystemdCgroup": false
	I1210 05:47:58.177005   51953 command_runner.go:130] >           },
	I1210 05:47:58.177011   51953 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 05:47:58.177019   51953 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 05:47:58.177023   51953 command_runner.go:130] >           "runtimePath": "",
	I1210 05:47:58.177030   51953 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 05:47:58.177035   51953 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 05:47:58.177041   51953 command_runner.go:130] >           "snapshotter": ""
	I1210 05:47:58.177044   51953 command_runner.go:130] >         }
	I1210 05:47:58.177049   51953 command_runner.go:130] >       }
	I1210 05:47:58.177052   51953 command_runner.go:130] >     },
	I1210 05:47:58.177065   51953 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 05:47:58.177073   51953 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 05:47:58.177078   51953 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 05:47:58.177083   51953 command_runner.go:130] >     "disableApparmor": false,
	I1210 05:47:58.177090   51953 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 05:47:58.177094   51953 command_runner.go:130] >     "disableProcMount": false,
	I1210 05:47:58.177098   51953 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 05:47:58.177102   51953 command_runner.go:130] >     "enableCDI": true,
	I1210 05:47:58.177106   51953 command_runner.go:130] >     "enableSelinux": false,
	I1210 05:47:58.177114   51953 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 05:47:58.177118   51953 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 05:47:58.177125   51953 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 05:47:58.177130   51953 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 05:47:58.177138   51953 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 05:47:58.177142   51953 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 05:47:58.177147   51953 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 05:47:58.177160   51953 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177170   51953 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 05:47:58.177176   51953 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177186   51953 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 05:47:58.177190   51953 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 05:47:58.177193   51953 command_runner.go:130] >   },
	I1210 05:47:58.177197   51953 command_runner.go:130] >   "features": {
	I1210 05:47:58.177201   51953 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 05:47:58.177204   51953 command_runner.go:130] >   },
	I1210 05:47:58.177209   51953 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 05:47:58.177221   51953 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177233   51953 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177237   51953 command_runner.go:130] >   "runtimeHandlers": [
	I1210 05:47:58.177246   51953 command_runner.go:130] >     {
	I1210 05:47:58.177250   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177255   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177259   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177261   51953 command_runner.go:130] >       }
	I1210 05:47:58.177264   51953 command_runner.go:130] >     },
	I1210 05:47:58.177267   51953 command_runner.go:130] >     {
	I1210 05:47:58.177271   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177275   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177279   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177282   51953 command_runner.go:130] >       },
	I1210 05:47:58.177287   51953 command_runner.go:130] >       "name": "runc"
	I1210 05:47:58.177289   51953 command_runner.go:130] >     }
	I1210 05:47:58.177293   51953 command_runner.go:130] >   ],
	I1210 05:47:58.177296   51953 command_runner.go:130] >   "status": {
	I1210 05:47:58.177300   51953 command_runner.go:130] >     "conditions": [
	I1210 05:47:58.177303   51953 command_runner.go:130] >       {
	I1210 05:47:58.177307   51953 command_runner.go:130] >         "message": "",
	I1210 05:47:58.177314   51953 command_runner.go:130] >         "reason": "",
	I1210 05:47:58.177318   51953 command_runner.go:130] >         "status": true,
	I1210 05:47:58.177329   51953 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 05:47:58.177335   51953 command_runner.go:130] >       },
	I1210 05:47:58.177339   51953 command_runner.go:130] >       {
	I1210 05:47:58.177345   51953 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 05:47:58.177356   51953 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 05:47:58.177360   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177365   51953 command_runner.go:130] >         "type": "NetworkReady"
	I1210 05:47:58.177373   51953 command_runner.go:130] >       },
	I1210 05:47:58.177376   51953 command_runner.go:130] >       {
	I1210 05:47:58.177397   51953 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 05:47:58.177406   51953 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 05:47:58.177414   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177420   51953 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 05:47:58.177425   51953 command_runner.go:130] >       }
	I1210 05:47:58.177428   51953 command_runner.go:130] >     ]
	I1210 05:47:58.177431   51953 command_runner.go:130] >   }
	I1210 05:47:58.177434   51953 command_runner.go:130] > }
	I1210 05:47:58.177746   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:58.177762   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:58.177786   51953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
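cni.go recommends kindnet here because the docker driver is paired with a non-docker runtime. A rough, simplified sketch of that decision under those assumptions; minikube's real selection logic weighs more factors (multinode, user overrides), so treat this as illustration only:

	package main

	import "fmt"

	// chooseCNI is a simplified stand-in for the recommendation logged
	// above: a docker-driver node running containerd needs a real CNI.
	func chooseCNI(driver, runtime string) string {
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "bridge"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "containerd")) // kindnet
	}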
	I1210 05:47:58.177809   51953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:47:58.177931   51953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
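The generated config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A small self-contained sketch (stdlib only, not minikube code) that scans such a stream and reports the kind of each document:

	package main

	import (
		"fmt"
		"strings"
	)

	// kindsOf returns the `kind:` value of each document in a
	// "---"-separated YAML stream like the kubeadm config above.
	func kindsOf(config string) []string {
		var kinds []string
		for _, doc := range strings.Split(config, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				t := strings.TrimSpace(line)
				if strings.HasPrefix(t, "kind:") {
					kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
				}
			}
		}
		return kinds
	}

	func main() {
		cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
		fmt.Println(kindsOf(cfg)) // [InitConfiguration ClusterConfiguration]
	}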
	
	I1210 05:47:58.178005   51953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:47:58.184894   51953 command_runner.go:130] > kubeadm
	I1210 05:47:58.184912   51953 command_runner.go:130] > kubectl
	I1210 05:47:58.184916   51953 command_runner.go:130] > kubelet
	I1210 05:47:58.185786   51953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:47:58.185866   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:47:58.193140   51953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:47:58.205426   51953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:47:58.217773   51953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 05:47:58.230424   51953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:47:58.234124   51953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:47:58.234224   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:58.348721   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:58.367663   51953 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:47:58.367683   51953 certs.go:195] generating shared ca certs ...
	I1210 05:47:58.367699   51953 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:58.367828   51953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:47:58.367870   51953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:47:58.367878   51953 certs.go:257] generating profile certs ...
	I1210 05:47:58.367976   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:47:58.368034   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:47:58.368079   51953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:47:58.368088   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:47:58.368100   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:47:58.368115   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:47:58.368126   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:47:58.368137   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:47:58.368148   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:47:58.368163   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:47:58.368174   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:47:58.368220   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:47:58.368248   51953 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:47:58.368256   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:47:58.368286   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:47:58.368309   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:47:58.368331   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:47:58.368373   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:58.368402   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.368414   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.368427   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.368978   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:47:58.388893   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:47:58.409416   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:47:58.428450   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:47:58.446489   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:47:58.465644   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:47:58.483264   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:47:58.500807   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:47:58.518107   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:47:58.536070   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:47:58.553632   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:47:58.571692   51953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:47:58.584898   51953 ssh_runner.go:195] Run: openssl version
	I1210 05:47:58.590608   51953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:47:58.591139   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.599076   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:47:58.606632   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610200   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610255   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610308   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.650574   51953 command_runner.go:130] > 51391683
	I1210 05:47:58.651004   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:47:58.658249   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.665388   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:47:58.672651   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676295   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676329   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676381   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.716661   51953 command_runner.go:130] > 3ec20f2e
	I1210 05:47:58.717156   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:47:58.724496   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.731755   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:47:58.739224   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742739   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742773   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742827   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.783109   51953 command_runner.go:130] > b5213941
	I1210 05:47:58.783531   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
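The test -s / ln -fs / openssl x509 -hash / test -L sequence repeated above is the standard OpenSSL CA-store install: the symlink name is the certificate's subject hash (e.g. b5213941) plus a ".0" suffix. A sketch of the same steps from Go, shelling out to openssl rather than reimplementing the hash; paths are the ones from the log and the helper name is made up:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links pem into certsDir under its OpenSSL subject hash,
	// the lookup name OpenSSL uses for trusted CAs. Needs write access
	// to certsDir (root on a normal system).
	func installCA(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // mirror ln -fs: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}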
	I1210 05:47:58.790793   51953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794232   51953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794258   51953 command_runner.go:130] >   Size: 1172      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:47:58.794265   51953 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1210 05:47:58.794272   51953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:58.794286   51953 command_runner.go:130] > Access: 2025-12-10 05:43:51.022657545 +0000
	I1210 05:47:58.794292   51953 command_runner.go:130] > Modify: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794297   51953 command_runner.go:130] > Change: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794305   51953 command_runner.go:130] >  Birth: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794558   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:47:58.837377   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.837465   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:47:58.877636   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.878121   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:47:58.918797   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.919235   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:47:58.959487   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.960010   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:47:59.003251   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:59.003763   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:47:59.044279   51953 command_runner.go:130] > Certificate will not expire
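Each `openssl x509 -checkend 86400` run above asks whether the certificate survives another 24 hours. The same check can be done natively with crypto/x509, as in this self-contained sketch (the path is one of the certs from the log; the function name is invented):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// notExpiringWithin reports whether the PEM certificate at path is
	// still valid after the given window - the equivalent of
	// `openssl x509 -checkend <seconds>`.
	func notExpiringWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := notExpiringWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if ok {
			fmt.Println("Certificate will not expire")
		}
	}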
	I1210 05:47:59.044747   51953 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:59.044823   51953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:47:59.044887   51953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:47:59.069970   51953 cri.go:89] found id: ""
	I1210 05:47:59.070038   51953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:47:59.076652   51953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:47:59.076673   51953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:47:59.076679   51953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:47:59.077535   51953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:47:59.077555   51953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:47:59.077617   51953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:47:59.084671   51953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:47:59.085448   51953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-644034" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.085850   51953 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "functional-644034" cluster setting kubeconfig missing "functional-644034" context setting]
	I1210 05:47:59.086310   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
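The kubeconfig repair above triggers because the file lacks both the cluster and context entries for the profile. A minimal sketch of that check using client-go's clientcmd loader, assuming the path and profile name from the log; the helper name is invented and this is not minikube's kubeconfig.go:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// needsRepair mirrors the check logged above: the kubeconfig needs
	// updating when it lacks either the cluster or the context entry
	// for the profile.
	func needsRepair(path, profile string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return false, err
		}
		_, hasCluster := cfg.Clusters[profile]
		_, hasContext := cfg.Contexts[profile]
		return !hasCluster || !hasContext, nil
	}

	func main() {
		repair, err := needsRepair("/home/jenkins/minikube-integration/22094-2307/kubeconfig", "functional-644034")
		if err != nil {
			panic(err)
		}
		fmt.Println("needs updating (will repair):", repair)
	}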
	I1210 05:47:59.087190   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.087371   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.088034   51953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:47:59.088055   51953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:47:59.088068   51953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:47:59.088074   51953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:47:59.088078   51953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:47:59.088429   51953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:47:59.089407   51953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:47:59.096980   51953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 05:47:59.097014   51953 kubeadm.go:602] duration metric: took 19.453757ms to restartPrimaryControlPlane
	I1210 05:47:59.097024   51953 kubeadm.go:403] duration metric: took 52.281886ms to StartCluster
	I1210 05:47:59.097064   51953 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097152   51953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.097734   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097941   51953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 05:47:59.098267   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:59.098318   51953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:47:59.098380   51953 addons.go:70] Setting storage-provisioner=true in profile "functional-644034"
	I1210 05:47:59.098393   51953 addons.go:239] Setting addon storage-provisioner=true in "functional-644034"
	I1210 05:47:59.098419   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.098907   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.101905   51953 out.go:179] * Verifying Kubernetes components...
	I1210 05:47:59.106662   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:59.109785   51953 addons.go:70] Setting default-storageclass=true in profile "functional-644034"
	I1210 05:47:59.109823   51953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-644034"
	I1210 05:47:59.110155   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.137186   51953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:47:59.140065   51953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.140094   51953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:47:59.140172   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.152137   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.152308   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.152605   51953 addons.go:239] Setting addon default-storageclass=true in "functional-644034"
	I1210 05:47:59.152636   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.153047   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.173160   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.202277   51953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:47:59.202307   51953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:47:59.202368   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.232670   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.321380   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:59.337472   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.374986   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.169551   51953 node_ready.go:35] waiting up to 6m0s for node "functional-644034" to be "Ready" ...
	I1210 05:48:00.169689   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.169752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.170008   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170051   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170077   51953 retry.go:31] will retry after 139.03743ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170121   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170135   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170145   51953 retry.go:31] will retry after 348.331986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
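The GET /api/v1/nodes/functional-644034 requests above are the "waiting up to 6m0s for node ... Ready" loop; the empty responses reflect the apiserver still refusing connections. A sketch of the same wait with client-go and apimachinery's wait helper, under the assumption that transient errors should simply be retried; kubeconfig path and node name are taken from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22094-2307/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll until the node reports Ready, swallowing the
		// connection-refused errors seen above while the apiserver restarts.
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-644034", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient: keep retrying
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node Ready:", err == nil)
	}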
	I1210 05:48:00.310507   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.415931   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.416069   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.416135   51953 retry.go:31] will retry after 233.204425ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.519312   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.585157   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.585240   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.585274   51953 retry.go:31] will retry after 499.606359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.650447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.669993   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.712181   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.715417   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.715449   51953 retry.go:31] will retry after 781.025556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
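The "will retry after 139.03743ms / 348.331986ms / ..." lines interleaved above come from a retry helper that sleeps a jittered, growing interval between attempts. A self-contained sketch of that pattern (stdlib only; the function shape is illustrative, not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts are exhausted,
	// sleeping a jittered, growing delay between tries.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := time.Duration(float64(base) * (1 + rand.Float64()) * float64(i+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, 100*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("connect: connection refused")
			}
			return nil
		})
		fmt.Println("done:", err)
	}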
	I1210 05:48:01.086035   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.148055   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.148095   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.148115   51953 retry.go:31] will retry after 644.355236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.170281   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.170372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.170734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.497246   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:01.552133   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.555247   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.555278   51953 retry.go:31] will retry after 1.200680207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.670555   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.670646   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.670959   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.793341   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.851452   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.854727   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.854768   51953 retry.go:31] will retry after 727.381606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.170188   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.170290   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.170618   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:02.170696   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:02.583237   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:02.649935   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.649981   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.650022   51953 retry.go:31] will retry after 1.310515996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.670155   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.670292   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.670651   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:02.757075   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:02.818837   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.821796   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.821831   51953 retry.go:31] will retry after 1.687874073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:03.170317   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.170406   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.170707   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.670505   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.670583   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.670925   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
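
The Request/Response pairs above are client-go's round_trippers debug output, emitted through Kubernetes structured logging at high verbosity; the headers=< ... > form is how klog renders a multi-line value. Approximately how such a line is produced (assumes the k8s.io/klog/v2 module; the actual call sites live inside client-go):

    // Emits a structured log line in the same style as the round_trippers
    // entries above; multi-line values are rendered with the "=<" block form.
    package main

    import "k8s.io/klog/v2"

    func main() {
    	defer klog.Flush()
    	klog.InfoS("Request", "verb", "GET",
    		"url", "https://192.168.49.2:8441/api/v1/nodes/functional-644034",
    		"headers", "Accept: application/vnd.kubernetes.protobuf,application/json\nUser-Agent: minikube-linux-arm64")
    }
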
	I1210 05:48:03.961404   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:04.024244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.024282   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.024323   51953 retry.go:31] will retry after 1.628415395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.170524   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.170651   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:04.171129   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:04.510724   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:04.566617   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.570030   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.570064   51953 retry.go:31] will retry after 2.695563296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.670310   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.670389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.670711   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.170563   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.170635   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.170967   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.653658   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:05.670351   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.670461   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.670799   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.744168   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:05.744207   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:05.744248   51953 retry.go:31] will retry after 1.470532715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:06.169848   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.169975   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.170317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:06.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.670264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:06.670329   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:07.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.170058   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:07.215626   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:07.266052   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:07.280336   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.280370   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.280387   51953 retry.go:31] will retry after 5.58106306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333195   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.333236   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333256   51953 retry.go:31] will retry after 2.610344026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
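
Note that storageclass.yaml and storage-provisioner.yaml retry on independent schedules, consistent with each addon manifest being applied from its own worker. Whether minikube structures it exactly this way is an assumption; the pattern in the log looks like one goroutine per manifest:

    // Sketch (an assumption about structure, not minikube's actual code):
    // apply each addon manifest concurrently, each with its own retry loop.
    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	manifests := []string{"storageclass.yaml", "storage-provisioner.yaml"}
    	var wg sync.WaitGroup
    	for _, m := range manifests {
    		wg.Add(1)
    		go func(manifest string) {
    			defer wg.Done()
    			// each worker would run its own apply-with-retry here
    			fmt.Println("applying", manifest)
    		}(m)
    	}
    	wg.Wait()
    }
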
	I1210 05:48:07.670753   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.670832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.671195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.169912   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.170281   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.669773   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.170205   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.170536   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:09.170594   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:09.670237   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.670311   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.670667   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.944159   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:10.010561   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:10.010619   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.010642   51953 retry.go:31] will retry after 2.5620788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.169787   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.169854   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.170167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:10.669895   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.669974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.169913   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.170234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.669833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.670159   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:11.670233   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:12.169956   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.170030   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.170375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.572886   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:12.631295   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.634400   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.634432   51953 retry.go:31] will retry after 5.90622422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.670736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.670808   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.671172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.862533   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:12.918893   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.918929   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.918949   51953 retry.go:31] will retry after 8.272023324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
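
The retry.go delays climb from under a second toward double digits (727 ms, 1.2 s, 1.3 s, 1.6 s, 2.6 s, 5.6 s, 8.3 s, ...), i.e. a growing, jittered backoff. A sketch of the general pattern follows; the exact schedule minikube computes may differ:

    // Jittered exponential backoff, illustrating the growing "will retry
    // after" delays in the log. Constants here are illustrative only.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v\n", wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return errors.New("all attempts failed")
    }

    func main() {
    	_ = retry(5, 500*time.Millisecond, func() error {
    		return errors.New("connection refused") // stand-in for the failing kubectl apply
    	})
    }
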
	I1210 05:48:13.170464   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.170532   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.170809   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:13.670589   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.670665   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.670979   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:13.671051   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:14.170623   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.170704   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.171052   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:14.669975   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.670351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.170046   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.170119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.170417   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.670099   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.670181   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:16.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:16.170210   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:16.669859   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.669945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.169889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.669877   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.669969   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.670225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:18.169971   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.170045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.170383   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:18.170445   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:18.540818   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:18.598871   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:18.601811   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.601841   51953 retry.go:31] will retry after 12.747843498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.670272   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.670582   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.170370   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.170452   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.170779   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.670779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.671114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.169841   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.169920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.170286   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.669841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.670151   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:20.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:21.169914   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.169987   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.170308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:21.191680   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:21.254244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:21.254291   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.254309   51953 retry.go:31] will retry after 13.504528238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.669784   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.669884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.670234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.169979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.170274   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.670052   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.670132   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.670457   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:22.670511   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:23.170156   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.170275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.170563   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:23.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.169911   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.170218   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.670237   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.670543   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:24.670597   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:25.170342   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.170412   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.170680   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:25.670543   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.670623   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.670919   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.170671   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.170749   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.669682   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.669752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.670007   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:27.170402   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.170479   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.170798   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:27.170859   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:27.670357   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.670437   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.670773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.170551   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.170643   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.170896   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.670265   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.670338   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.670758   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:29.170472   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.170542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.170877   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:29.170933   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:29.669736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.669810   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.670135   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.169940   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.170305   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.669879   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.669957   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.670279   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.169834   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.350447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:31.407735   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:31.410898   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.410931   51953 retry.go:31] will retry after 18.518112559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
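
The hint in the error text to pass --validate=false would not rescue these applies: it only skips the client-side OpenAPI download, and the request that follows would hit the same refused connection. A small check that separates "apiserver down" from a genuine schema problem (assumes Linux, where a refused dial matches syscall.ECONNREFUSED):

    // Distinguish a down apiserver from a real validation failure before
    // deciding whether --validate=false could possibly help.
    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
    	switch {
    	case errors.Is(err, syscall.ECONNREFUSED):
    		fmt.Println("apiserver down: no apply can succeed yet, validated or not")
    	case err != nil:
    		fmt.Println("other network error:", err)
    	default:
    		conn.Close()
    		fmt.Println("apiserver reachable; a validation error now would be a schema problem")
    	}
    }
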
	I1210 05:48:31.670455   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.670542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.670952   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:31.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:32.170764   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.170837   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.171167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:32.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.669900   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.670158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.169936   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.669974   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.670051   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.670366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.170663   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.170730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.171001   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:34.171083   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:34.670065   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.670459   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.759888   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:34.813991   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:34.817148   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:34.817180   51953 retry.go:31] will retry after 7.858877757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
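Each failed apply above is rescheduled on a timer ("will retry after 7.858877757s"; later attempts below wait ~23s and ~39s), so the addon manifests land once the apiserver starts accepting connections again. A minimal sketch of that retry shape, assuming a kubectl binary on PATH; applyWithRetry and the hard-coded delays are illustrative, not minikube's actual retry.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry (hypothetical helper) re-runs `kubectl apply` after each
// failure, sleeping the given delay between attempts, as this log does.
func applyWithRetry(manifest string, delays []time.Duration) error {
	var lastErr error
	for _, d := range delays {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		fmt.Printf("will retry after %s: %v\n", d, lastErr)
		time.Sleep(d)
	}
	return lastErr
}

func main() {
	// Delays mirror the growing intervals seen in this log.
	delays := []time.Duration{8 * time.Second, 23 * time.Second, 39 * time.Second}
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", delays); err != nil {
		fmt.Println("giving up:", err)
	}
}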
	[... node polls and node_ready "will retry" warnings repeated every ~500ms from 05:48:35 to 05:48:42, all "connection refused" ...]
	I1210 05:48:42.677131   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:42.736218   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:42.736261   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:42.736279   51953 retry.go:31] will retry after 23.425189001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls and "will retry" warnings continued from 05:48:43 to 05:48:49, all "connection refused" ...]
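Behind these node_ready warnings is a loop that fetches the Node object every ~500ms and inspects its Ready condition, treating connection-refused errors as transient. A sketch of that check with client-go, assuming the kubeconfig path used in this log; waitNodeReady is an invented name, not minikube's function:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady (hypothetical) polls the node until its Ready condition is
// True, logging and retrying on transient errors such as connection refused.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-644034"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}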
	I1210 05:48:49.930022   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:49.989791   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:49.993079   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:49.993114   51953 retry.go:31] will retry after 23.38662002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls and "will retry" warnings continued from 05:48:50 to 05:49:05, all "connection refused" ...]
	I1210 05:49:06.161707   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... background node poll at 05:49:06, again refused ...]
	I1210 05:49:06.215983   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:06.219418   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:06.219449   51953 retry.go:31] will retry after 38.750779649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
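The validation failure is secondary: kubectl first downloads the server's /openapi/v2 schema to validate the manifest, so with the apiserver down the apply fails before anything is submitted (and the suggested --validate=false would not help, since the apply request itself would be refused the same way). A quick reachability probe, sketched here as a hypothetical diagnostic, reproduces the refused dial:

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver endpoint from the log; a refused TCP dial reproduces
// the "connect: connection refused" seen in every request above.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}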
	[... node polls and "will retry" warnings continued from 05:49:06 to 05:49:13, all "connection refused" ...]
	I1210 05:49:13.380712   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:13.443508   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:13.443549   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:13.443568   51953 retry.go:31] will retry after 17.108062036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls and "will retry" warnings continue unchanged through 05:49:26, all "connection refused" ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:26.670150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.169852   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.169930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.170272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.669984   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.670061   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.670384   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:27.670440   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:28.169751   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.170155   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:28.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.669874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.670210   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.170062   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.170136   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.170491   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.670274   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.670550   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:29.670593   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:30.170374   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.170446   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.170838   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:30.552353   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:30.608474   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608517   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608604   51953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
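(The apply fails during kubectl's client-side validation because the OpenAPI schema cannot be downloaded from the unreachable apiserver. Note that the suggested --validate=false escape hatch would not help here: the subsequent apply would hit the same refused connection. minikube instead records "apply failed, will retry" at addons.go:477. A hedged Go sketch of such a retry wrapper follows; the command line is taken verbatim from the log, while the retry policy is an assumption.)

// Hypothetical sketch of an "apply failed, will retry" wrapper.
// Command and paths come from the log above; attempts/backoff are assumed.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig", // env assignment passed through sudo, as in the log
			"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// --validate=false would only skip the openapi download; it cannot
		// help while the TCP connection to the apiserver itself is refused.
		lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
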
	I1210 05:49:30.670690   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.670767   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.671090   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.169783   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.169860   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.170226   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.669889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.670241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:32.169940   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.170013   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.170338   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:32.170396   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:32.670045   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.670119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.670396   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.169834   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.170309   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.670201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.169903   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.169973   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.170269   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.670193   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.670266   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.670601   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:34.670655   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:35.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.170214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:35.669756   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.169901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.170223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.669946   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.670020   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.670367   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:37.170034   51953 type.go:168] "Request Body" body=""
	I1210 05:49:37.170108   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:37.170407   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:37.170461   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:37.669822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:37.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:37.670249   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:38.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:49:38.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:38.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:38.669935   51953 type.go:168] "Request Body" body=""
	I1210 05:49:38.670003   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:38.670313   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:39.170298   51953 type.go:168] "Request Body" body=""
	I1210 05:49:39.170373   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:39.170717   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:39.170771   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:39.670468   51953 type.go:168] "Request Body" body=""
	I1210 05:49:39.670545   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:39.670883   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:40.170669   51953 type.go:168] "Request Body" body=""
	I1210 05:49:40.170737   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:40.171069   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:40.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:49:40.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:40.670211   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:41.169813   51953 type.go:168] "Request Body" body=""
	I1210 05:49:41.169884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:41.170198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:41.669764   51953 type.go:168] "Request Body" body=""
	I1210 05:49:41.669859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:41.670152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:41.670193   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:42.169845   51953 type.go:168] "Request Body" body=""
	I1210 05:49:42.169948   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:42.170319   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:42.669816   51953 type.go:168] "Request Body" body=""
	I1210 05:49:42.669885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:42.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:43.169758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:43.169831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:43.170096   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:43.669848   51953 type.go:168] "Request Body" body=""
	I1210 05:49:43.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:43.670267   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:43.670317   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:44.169816   51953 type.go:168] "Request Body" body=""
	I1210 05:49:44.169893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:44.170187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:44.670057   51953 type.go:168] "Request Body" body=""
	I1210 05:49:44.670140   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:44.670470   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:44.970959   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:45.060109   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064226   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064337   51953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:49:45.067552   51953 out.go:179] * Enabled addons: 
	I1210 05:49:45.070225   51953 addons.go:530] duration metric: took 1m45.971891823s for enable addons: enabled=[]
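(The empty enabled=[] list confirms the outcome: after 1m45s of retries, neither storage-provisioner nor default-storageclass could be applied, consistent with every node poll above and below returning "connection refused" from 192.168.49.2:8441.)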
	I1210 05:49:45.169999   51953 type.go:168] "Request Body" body=""
	I1210 05:49:45.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:45.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:45.669844   51953 type.go:168] "Request Body" body=""
	I1210 05:49:45.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:45.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:46.169931   51953 type.go:168] "Request Body" body=""
	I1210 05:49:46.170025   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:46.170316   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:46.170369   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:46.670055   51953 type.go:168] "Request Body" body=""
	I1210 05:49:46.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:46.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:47.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:49:47.169900   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:47.170277   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:47.669758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:47.669844   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:47.670170   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:48.169861   51953 type.go:168] "Request Body" body=""
	I1210 05:49:48.169933   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:48.170293   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:48.669823   51953 type.go:168] "Request Body" body=""
	I1210 05:49:48.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:48.670185   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:48.670239   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:49.170189   51953 type.go:168] "Request Body" body=""
	I1210 05:49:49.170282   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:49.170581   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:49.670519   51953 type.go:168] "Request Body" body=""
	I1210 05:49:49.670591   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:49.670933   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:50.170751   51953 type.go:168] "Request Body" body=""
	I1210 05:49:50.170838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:50.171163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:50.669768   51953 type.go:168] "Request Body" body=""
	I1210 05:49:50.669842   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:50.670163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:51.169874   51953 type.go:168] "Request Body" body=""
	I1210 05:49:51.169945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:51.170294   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:51.170350   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:51.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:49:51.669925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:51.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:52.169785   51953 type.go:168] "Request Body" body=""
	I1210 05:49:52.169868   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:52.170166   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:52.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:49:52.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:52.670278   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:53.170002   51953 type.go:168] "Request Body" body=""
	I1210 05:49:53.170083   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:53.170428   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:53.170482   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:53.670134   51953 type.go:168] "Request Body" body=""
	I1210 05:49:53.670209   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:53.670537   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:54.170330   51953 type.go:168] "Request Body" body=""
	I1210 05:49:54.170403   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:54.170997   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:54.669762   51953 type.go:168] "Request Body" body=""
	I1210 05:49:54.669840   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:54.670157   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:55.170437   51953 type.go:168] "Request Body" body=""
	I1210 05:49:55.170508   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:55.170825   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:55.170879   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:55.670656   51953 type.go:168] "Request Body" body=""
	I1210 05:49:55.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:55.671067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:56.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:49:56.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:56.170163   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:56.670631   51953 type.go:168] "Request Body" body=""
	I1210 05:49:56.670708   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:56.671047   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:57.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:49:57.169826   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:57.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:57.669853   51953 type.go:168] "Request Body" body=""
	I1210 05:49:57.669923   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:57.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:57.670309   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:58.169747   51953 type.go:168] "Request Body" body=""
	I1210 05:49:58.169817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:58.170122   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:58.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:49:58.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:58.670275   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:59.170063   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.170156   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.170502   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:59.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.670792   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.671123   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:59.671171   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:00.169945   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.170054   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.170391   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:00.670293   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.670372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.670734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.170379   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.170445   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.170785   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.670657   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.671101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:02.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.169916   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:02.170292   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:02.670641   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.670714   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.671049   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.169821   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.170173   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.670257   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.169808   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.169878   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.170170   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.670153   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.670227   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.670558   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:04.670612   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:05.170389   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.170463   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.170790   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:05.670350   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.670419   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.670674   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.170479   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.170562   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.170930   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.670726   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.670801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.671141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:06.671199   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:07.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.170225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:07.669823   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.669897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.670237   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.169822   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.169902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.669915   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.669997   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.670259   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:09.170295   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.170366   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.170686   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:09.170740   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:09.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.670275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.670611   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.170386   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.170464   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.170732   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.670493   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.670908   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:11.170688   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.170762   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.171109   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:11.171166   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:11.669753   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.670111   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-644034 poll cycle above repeats unchanged every ~500 ms from 05:50:12 through 05:51:13, each attempt failing immediately (status="", milliseconds=0); the node_ready.go:55 "will retry" connection-refused warning recurs roughly every 2 s, last at 05:51:12.]
	I1210 05:51:13.670750   51953 type.go:168] "Request Body" body=""
	I1210 05:51:13.670830   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:13.671209   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:14.169828   51953 type.go:168] "Request Body" body=""
	I1210 05:51:14.169903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:14.170243   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:14.170295   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:14.670005   51953 type.go:168] "Request Body" body=""
	I1210 05:51:14.670076   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:14.670345   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:15.170019   51953 type.go:168] "Request Body" body=""
	I1210 05:51:15.170092   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:15.170405   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:15.670090   51953 type.go:168] "Request Body" body=""
	I1210 05:51:15.670164   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:15.670488   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:16.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:51:16.169830   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:16.170154   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:16.669884   51953 type.go:168] "Request Body" body=""
	I1210 05:51:16.669954   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:16.670321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:16.670378   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:17.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:51:17.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:17.170203   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:17.669857   51953 type.go:168] "Request Body" body=""
	I1210 05:51:17.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:17.670182   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:18.169839   51953 type.go:168] "Request Body" body=""
	I1210 05:51:18.169915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:18.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:18.669814   51953 type.go:168] "Request Body" body=""
	I1210 05:51:18.669894   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:18.670212   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:19.170018   51953 type.go:168] "Request Body" body=""
	I1210 05:51:19.170103   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:19.170385   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:19.170435   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:19.670213   51953 type.go:168] "Request Body" body=""
	I1210 05:51:19.670290   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:19.670634   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:20.170418   51953 type.go:168] "Request Body" body=""
	I1210 05:51:20.170505   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:20.170868   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:20.670496   51953 type.go:168] "Request Body" body=""
	I1210 05:51:20.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:20.670919   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:21.170726   51953 type.go:168] "Request Body" body=""
	I1210 05:51:21.170805   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:21.171135   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:21.171197   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:21.669812   51953 type.go:168] "Request Body" body=""
	I1210 05:51:21.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:21.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:22.169804   51953 type.go:168] "Request Body" body=""
	I1210 05:51:22.169880   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:22.170180   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:22.669830   51953 type.go:168] "Request Body" body=""
	I1210 05:51:22.669902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:22.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:23.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:51:23.169901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:23.170299   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:23.669871   51953 type.go:168] "Request Body" body=""
	I1210 05:51:23.669940   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:23.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:23.670238   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:24.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:51:24.169903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:24.170239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:24.670061   51953 type.go:168] "Request Body" body=""
	I1210 05:51:24.670134   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:24.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:25.169972   51953 type.go:168] "Request Body" body=""
	I1210 05:51:25.170044   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:25.170325   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:25.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:51:25.669907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:25.670245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:25.670298   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:26.170334   51953 type.go:168] "Request Body" body=""
	I1210 05:51:26.170405   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:26.170720   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:26.670508   51953 type.go:168] "Request Body" body=""
	I1210 05:51:26.670574   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:26.670837   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:27.170610   51953 type.go:168] "Request Body" body=""
	I1210 05:51:27.170687   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:27.171034   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:27.670641   51953 type.go:168] "Request Body" body=""
	I1210 05:51:27.670716   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:27.671052   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:27.671107   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:28.170507   51953 type.go:168] "Request Body" body=""
	I1210 05:51:28.170587   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:28.170917   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:28.670721   51953 type.go:168] "Request Body" body=""
	I1210 05:51:28.670801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:28.671160   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:29.170214   51953 type.go:168] "Request Body" body=""
	I1210 05:51:29.170299   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:29.170609   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:29.670390   51953 type.go:168] "Request Body" body=""
	I1210 05:51:29.670459   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:29.670758   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:30.170562   51953 type.go:168] "Request Body" body=""
	I1210 05:51:30.170660   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:30.171075   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:30.171137   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:30.670749   51953 type.go:168] "Request Body" body=""
	I1210 05:51:30.670827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:30.671196   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:31.169761   51953 type.go:168] "Request Body" body=""
	I1210 05:51:31.169835   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:31.170191   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:31.669883   51953 type.go:168] "Request Body" body=""
	I1210 05:51:31.669961   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:31.670300   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:32.169880   51953 type.go:168] "Request Body" body=""
	I1210 05:51:32.169952   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:32.170273   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:32.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:51:32.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:32.670089   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:32.670131   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:33.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:51:33.169851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:33.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:33.669798   51953 type.go:168] "Request Body" body=""
	I1210 05:51:33.669868   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:33.670181   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:34.169753   51953 type.go:168] "Request Body" body=""
	I1210 05:51:34.169825   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:34.170091   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:34.669834   51953 type.go:168] "Request Body" body=""
	I1210 05:51:34.669907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:34.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:34.670277   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:35.169820   51953 type.go:168] "Request Body" body=""
	I1210 05:51:35.169944   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:35.170278   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:35.669951   51953 type.go:168] "Request Body" body=""
	I1210 05:51:35.670023   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:35.670279   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:36.169953   51953 type.go:168] "Request Body" body=""
	I1210 05:51:36.170025   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:36.170358   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:36.670063   51953 type.go:168] "Request Body" body=""
	I1210 05:51:36.670133   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:36.670464   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:36.670516   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:37.170022   51953 type.go:168] "Request Body" body=""
	I1210 05:51:37.170090   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:37.170409   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:37.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:51:37.669901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:37.670272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:38.169963   51953 type.go:168] "Request Body" body=""
	I1210 05:51:38.170040   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:38.170377   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:38.669747   51953 type.go:168] "Request Body" body=""
	I1210 05:51:38.669816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:38.670139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:39.170160   51953 type.go:168] "Request Body" body=""
	I1210 05:51:39.170230   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:39.170516   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:39.170556   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:39.670447   51953 type.go:168] "Request Body" body=""
	I1210 05:51:39.670519   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:39.670875   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:40.170718   51953 type.go:168] "Request Body" body=""
	I1210 05:51:40.170785   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:40.171078   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:40.670706   51953 type.go:168] "Request Body" body=""
	I1210 05:51:40.670781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:40.671102   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:41.169776   51953 type.go:168] "Request Body" body=""
	I1210 05:51:41.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:41.170184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:41.670004   51953 type.go:168] "Request Body" body=""
	I1210 05:51:41.670161   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:41.670773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:41.670875   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:42.170771   51953 type.go:168] "Request Body" body=""
	I1210 05:51:42.170847   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:42.171213   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:42.669910   51953 type.go:168] "Request Body" body=""
	I1210 05:51:42.669982   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:42.670284   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:43.169986   51953 type.go:168] "Request Body" body=""
	I1210 05:51:43.170065   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:43.170327   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:43.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:51:43.669901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:43.670214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:44.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:51:44.169896   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:44.170247   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:44.170307   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:44.670143   51953 type.go:168] "Request Body" body=""
	I1210 05:51:44.670220   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:44.670489   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:45.170378   51953 type.go:168] "Request Body" body=""
	I1210 05:51:45.170522   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:45.171164   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:45.669904   51953 type.go:168] "Request Body" body=""
	I1210 05:51:45.669977   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:45.670266   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:46.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:51:46.169981   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:46.170247   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:46.669982   51953 type.go:168] "Request Body" body=""
	I1210 05:51:46.670065   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:46.670412   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:46.670463   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:47.170121   51953 type.go:168] "Request Body" body=""
	I1210 05:51:47.170196   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:47.170526   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:47.670279   51953 type.go:168] "Request Body" body=""
	I1210 05:51:47.670353   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:47.670622   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:48.170397   51953 type.go:168] "Request Body" body=""
	I1210 05:51:48.170475   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:48.170792   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:48.670571   51953 type.go:168] "Request Body" body=""
	I1210 05:51:48.670649   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:48.670997   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:48.671073   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:49.170679   51953 type.go:168] "Request Body" body=""
	I1210 05:51:49.170753   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:49.171102   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:49.670157   51953 type.go:168] "Request Body" body=""
	I1210 05:51:49.670228   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:49.670552   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:50.170360   51953 type.go:168] "Request Body" body=""
	I1210 05:51:50.170435   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:50.170752   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:50.670554   51953 type.go:168] "Request Body" body=""
	I1210 05:51:50.670636   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:50.670942   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:51.170729   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.171139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:51.171187   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:51.669733   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.669807   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.670146   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.169929   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.170284   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.669819   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.669893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.670207   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.169992   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.669991   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.670070   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.670340   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:53.670380   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:54.170031   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.170110   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.170441   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:54.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.670529   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.169832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.170177   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.669779   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:56.169901   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.169974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:56.170373   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:56.669742   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.669816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.670103   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.169781   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.170181   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.669889   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.669965   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:58.170689   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.170758   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.171080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:58.171123   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:58.669796   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.170073   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.170489   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.670565   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.170445   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.170546   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.170880   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.669804   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.670208   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:00.670259   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:01.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.169859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:01.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.670097   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.670276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:02.670355   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:03.170035   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.170108   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.170401   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.669890   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.670202   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.169758   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.169836   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.170117   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.670079   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.670164   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.670516   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:04.670563   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:05.169839   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.169935   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.170260   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:05.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.669823   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.670097   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.170195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.670297   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:07.169768   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.169841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.170149   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:07.170196   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:07.669845   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.669915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.169973   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.170047   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.170399   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.670082   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.670165   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:09.170372   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.170444   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.170740   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:09.170790   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:09.670553   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.670631   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.670948   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.170667   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.170738   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.170996   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.669729   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.669805   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.670126   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.669912   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.669979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:11.670280   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:12.169939   51953 type.go:168] "Request Body" body=""
	I1210 05:52:12.170014   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:12.170362   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:12.670078   51953 type.go:168] "Request Body" body=""
	I1210 05:52:12.670162   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:12.670514   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:13.169756   51953 type.go:168] "Request Body" body=""
	I1210 05:52:13.169834   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:13.170093   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:13.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:52:13.669896   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:13.670227   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:14.169820   51953 type.go:168] "Request Body" body=""
	I1210 05:52:14.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:14.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:14.170294   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:14.670030   51953 type.go:168] "Request Body" body=""
	I1210 05:52:14.670095   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:14.670375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:15.170120   51953 type.go:168] "Request Body" body=""
	I1210 05:52:15.170196   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:15.170539   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:15.670302   51953 type.go:168] "Request Body" body=""
	I1210 05:52:15.670373   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:15.670676   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:16.170432   51953 type.go:168] "Request Body" body=""
	I1210 05:52:16.170507   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:16.170803   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:16.170857   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:16.670503   51953 type.go:168] "Request Body" body=""
	I1210 05:52:16.670576   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:16.670887   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:17.170709   51953 type.go:168] "Request Body" body=""
	I1210 05:52:17.170781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:17.171089   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:17.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:52:17.669817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:17.670129   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:18.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:52:18.169909   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:18.170246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:18.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:52:18.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:18.670224   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:18.670276   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:19.170163   51953 type.go:168] "Request Body" body=""
	I1210 05:52:19.170242   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:19.170554   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:19.670487   51953 type.go:168] "Request Body" body=""
	I1210 05:52:19.670569   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:19.670973   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:20.169737   51953 type.go:168] "Request Body" body=""
	I1210 05:52:20.169824   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:20.170206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:20.669861   51953 type.go:168] "Request Body" body=""
	I1210 05:52:20.669938   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:20.670209   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:21.169833   51953 type.go:168] "Request Body" body=""
	I1210 05:52:21.169904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:21.170238   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:21.170290   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:21.669832   51953 type.go:168] "Request Body" body=""
	I1210 05:52:21.669911   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:21.670234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:22.169913   51953 type.go:168] "Request Body" body=""
	I1210 05:52:22.169983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:22.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:22.669812   51953 type.go:168] "Request Body" body=""
	I1210 05:52:22.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:22.670179   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:23.169847   51953 type.go:168] "Request Body" body=""
	I1210 05:52:23.169930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:23.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:23.669962   51953 type.go:168] "Request Body" body=""
	I1210 05:52:23.670037   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:23.670326   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:23.670367   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:24.170037   51953 type.go:168] "Request Body" body=""
	I1210 05:52:24.170109   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:24.170439   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:24.670168   51953 type.go:168] "Request Body" body=""
	I1210 05:52:24.670241   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:24.670573   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:25.170350   51953 type.go:168] "Request Body" body=""
	I1210 05:52:25.170421   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:25.170687   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:25.670431   51953 type.go:168] "Request Body" body=""
	I1210 05:52:25.670504   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:25.670821   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:25.670873   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:26.170481   51953 type.go:168] "Request Body" body=""
	I1210 05:52:26.170555   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:26.170912   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:26.670658   51953 type.go:168] "Request Body" body=""
	I1210 05:52:26.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:26.670998   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:27.169719   51953 type.go:168] "Request Body" body=""
	I1210 05:52:27.169797   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:27.170122   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:27.669792   51953 type.go:168] "Request Body" body=""
	I1210 05:52:27.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:27.670184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:28.169778   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.169852   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.170172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:28.170229   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:28.669766   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.669838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.170045   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.170125   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.170415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.670193   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.670453   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:30.170123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.170199   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.170559   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:30.170635   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:30.670127   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.670200   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.670509   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.169839   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.170095   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.669875   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.670200   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.170198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.669769   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.670162   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:32.670212   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:33.169842   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.169915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:33.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.169925   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.170009   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.170331   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.670116   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.670194   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:34.670571   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:35.170367   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.170782   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:35.670577   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.670647   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.670912   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.170722   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.171183   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.669843   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:37.170702   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.170771   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.171105   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:37.171165   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:37.669824   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.670242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.169909   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.170276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.669712   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.669779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.670087   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.169961   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.170037   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.170366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.670236   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.670306   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.670633   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:39.670687   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:40.170413   51953 type.go:168] "Request Body" body=""
	I1210 05:52:40.170482   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:40.170769   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:40.670553   51953 type.go:168] "Request Body" body=""
	I1210 05:52:40.670623   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:40.670995   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:41.169710   51953 type.go:168] "Request Body" body=""
	I1210 05:52:41.169781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:41.170119   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:41.669770   51953 type.go:168] "Request Body" body=""
	I1210 05:52:41.669843   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:41.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:42.169882   51953 type.go:168] "Request Body" body=""
	I1210 05:52:42.169973   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:42.170386   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:42.170462   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:42.670156   51953 type.go:168] "Request Body" body=""
	I1210 05:52:42.670236   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:42.670580   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:43.170323   51953 type.go:168] "Request Body" body=""
	I1210 05:52:43.170400   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:43.170660   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:43.670466   51953 type.go:168] "Request Body" body=""
	I1210 05:52:43.670547   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:43.670868   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:44.170674   51953 type.go:168] "Request Body" body=""
	I1210 05:52:44.170752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:44.171091   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:44.171150   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:44.670034   51953 type.go:168] "Request Body" body=""
	I1210 05:52:44.670107   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:44.670403   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:45.170340   51953 type.go:168] "Request Body" body=""
	I1210 05:52:45.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:45.170941   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:45.670006   51953 type.go:168] "Request Body" body=""
	I1210 05:52:45.670100   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:45.670436   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:46.169753   51953 type.go:168] "Request Body" body=""
	I1210 05:52:46.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:46.170099   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:46.669811   51953 type.go:168] "Request Body" body=""
	I1210 05:52:46.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:46.670235   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:46.670295   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:47.169851   51953 type.go:168] "Request Body" body=""
	I1210 05:52:47.169946   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:47.170312   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:47.669774   51953 type.go:168] "Request Body" body=""
	I1210 05:52:47.669840   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:47.670114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:48.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:52:48.169897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:48.170265   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:48.669972   51953 type.go:168] "Request Body" body=""
	I1210 05:52:48.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:48.670364   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:48.670428   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:49.170294   51953 type.go:168] "Request Body" body=""
	I1210 05:52:49.170370   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:49.170634   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:49.670676   51953 type.go:168] "Request Body" body=""
	I1210 05:52:49.670754   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:49.671078   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:50.169788   51953 type.go:168] "Request Body" body=""
	I1210 05:52:50.169866   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:50.170203   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:50.669782   51953 type.go:168] "Request Body" body=""
	I1210 05:52:50.669858   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:50.670155   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:51.169834   51953 type.go:168] "Request Body" body=""
	I1210 05:52:51.169907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:51.170263   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:51.170321   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:51.670020   51953 type.go:168] "Request Body" body=""
	I1210 05:52:51.670091   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:51.670371   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:52.170061   51953 type.go:168] "Request Body" body=""
	I1210 05:52:52.170132   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:52.170447   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:52.669850   51953 type.go:168] "Request Body" body=""
	I1210 05:52:52.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:52.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:53.169932   51953 type.go:168] "Request Body" body=""
	I1210 05:52:53.170009   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:53.170341   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:53.170397   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:53.670032   51953 type.go:168] "Request Body" body=""
	I1210 05:52:53.670105   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:53.670414   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:54.169881   51953 type.go:168] "Request Body" body=""
	I1210 05:52:54.169952   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:54.170302   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:54.670115   51953 type.go:168] "Request Body" body=""
	I1210 05:52:54.670186   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:54.670529   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:55.170256   51953 type.go:168] "Request Body" body=""
	I1210 05:52:55.170339   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:55.170657   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:55.170714   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:55.670524   51953 type.go:168] "Request Body" body=""
	I1210 05:52:55.670595   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:55.670950   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:56.170826   51953 type.go:168] "Request Body" body=""
	I1210 05:52:56.170903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:56.171240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:56.669759   51953 type.go:168] "Request Body" body=""
	I1210 05:52:56.669835   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:56.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:57.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:52:57.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:57.170214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:57.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:52:57.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:57.670255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:57.670314   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:58.170422   51953 type.go:168] "Request Body" body=""
	I1210 05:52:58.170495   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:58.170808   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:58.670565   51953 type.go:168] "Request Body" body=""
	I1210 05:52:58.670637   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:58.670958   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-644034 request/response cycle repeats every ~500ms from 05:52:59 through 05:53:59; every attempt returns an empty response, and node_ready.go:55 logs the same warning roughly every 2s: error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1210 05:53:59.669763   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.669831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:54:00.169895   51953 type.go:168] "Request Body" body=""
	I1210 05:54:00.170210   51953 node_ready.go:38] duration metric: took 6m0.000621671s for node "functional-644034" to be "Ready" ...
	I1210 05:54:00.173449   51953 out.go:203] 
	W1210 05:54:00.176680   51953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 05:54:00.176713   51953 out.go:285] * 
	W1210 05:54:00.178858   51953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:54:00.215003   51953 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506429728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506448067Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506489134Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506504420Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506514545Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506527788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506537643Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506548720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506564646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506593873Z" level=info msg="Connect containerd service"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.506912364Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.507519251Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.527118026Z" level=info msg="Start subscribing containerd event"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.527202514Z" level=info msg="Start recovering state"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.530801717Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.530884449Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549808867Z" level=info msg="Start event monitor"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549866450Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549876329Z" level=info msg="Start streaming server"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549885511Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549893150Z" level=info msg="runtime interface starting up..."
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549899164Z" level=info msg="starting plugins..."
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.549910865Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 05:47:57 functional-644034 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 05:47:57 functional-644034 containerd[5850]: time="2025-12-10T05:47:57.551142386Z" level=info msg="containerd successfully booted in 0.065614s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:54:04.821358    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:04.821865    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:04.823507    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:04.824011    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:04.825534    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 05:54:04 up 36 min,  0 user,  load average: 0.28, 0.36, 0.58
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 10 05:54:01 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:01 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:01 functional-644034 kubelet[8945]: E1210 05:54:01.979120    8945 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:01 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:02 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 10 05:54:02 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:02 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:02 functional-644034 kubelet[9043]: E1210 05:54:02.719529    9043 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:02 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:02 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:03 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 10 05:54:03 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:03 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:03 functional-644034 kubelet[9062]: E1210 05:54:03.453881    9062 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:03 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:03 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:04 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 10 05:54:04 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:04 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:04 functional-644034 kubelet[9085]: E1210 05:54:04.225734    9085 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:04 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:04 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
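
The kubelet section above is the root failure the rest of this report keeps re-observing: kubelet exits on startup with "kubelet is configured to not run on a host using cgroup v1" (restart counter past 810), so the apiserver static pod never comes up and every request to 192.168.49.2:8441 is refused. The kernel section shows an Ubuntu 20.04 host (5.15.0-1084-aws), which still boots with cgroup v1 by default. A minimal sketch for confirming the hierarchy, assuming shell access to the Jenkins host; the commands are stock coreutils/Docker/minikube CLI, but the expected outputs are assumptions about this environment:

	# cgroup v2 mounts /sys/fs/cgroup as cgroup2fs; cgroup v1 reports tmpfs
	stat -fc %T /sys/fs/cgroup/
	# the kicbase node sees the same kernel hierarchy (CgroupnsMode is "host"
	# per the docker inspect output below)
	minikube -p functional-644034 ssh -- stat -fc %T /sys/fs/cgroup/
	# Docker's own report of the cgroup version it hands to containers
	docker info --format '{{.CgroupVersion}}'
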
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (389.219123ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.25s)
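
Every kubectl-level failure in this group reduces to the same symptom: connection refused on port 8441, whether dialed as 192.168.49.2:8441 from the host or as localhost:8441 inside the node. A quick hedged check to separate a port-publishing problem from the apiserver simply not running (32791 is the published host port taken from the docker inspect output below; plain curl, nothing minikube-specific):

	# through the port Docker publishes on the host loopback
	curl -sk https://127.0.0.1:32791/healthz
	# directly against the apiserver bind address inside the node
	minikube -p functional-644034 ssh -- curl -sk https://localhost:8441/healthz

Both refusing, as here, points at the control plane itself rather than the Docker port mapping.
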

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 kubectl -- --context functional-644034 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 kubectl -- --context functional-644034 get pods: exit status 1 (107.364128ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-644034 kubectl -- --context functional-644034 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
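
The inspect output shows the container-side plumbing is healthy: the container is running, holds 192.168.49.2 on the functional-644034 network, and 8441/tcp is published to 127.0.0.1:32791. Two lighter ways to read that same mapping, a sketch using the stock Docker CLI (the Go template mirrors the one minikube itself runs for 22/tcp later in this log):

	docker port functional-644034 8441/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-644034
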
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (310.517148ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-944360 image ls --format short --alsologtostderr                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls --format yaml --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh     │ functional-944360 ssh pgrep buildkitd                                                                                                                 │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ image   │ functional-944360 image ls --format json --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls --format table --alsologtostderr                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr                                                │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls                                                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ delete  │ -p functional-944360                                                                                                                                  │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ start   │ -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ start   │ -p functional-644034 --alsologtostderr -v=8                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │                     │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:latest                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add minikube-local-cache-test:functional-644034                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache delete minikube-local-cache-test:functional-644034                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl images                                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ cache   │ functional-644034 cache reload                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ kubectl │ functional-644034 kubectl -- --context functional-644034 get pods                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:47:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:47:54.556574   51953 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:47:54.556774   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.556804   51953 out.go:374] Setting ErrFile to fd 2...
	I1210 05:47:54.556824   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.557680   51953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:47:54.558123   51953 out.go:368] Setting JSON to false
	I1210 05:47:54.558985   51953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1825,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:47:54.559094   51953 start.go:143] virtualization:  
	I1210 05:47:54.562634   51953 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:47:54.566518   51953 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:47:54.566592   51953 notify.go:221] Checking for updates...
	I1210 05:47:54.572379   51953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:47:54.575335   51953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:54.578363   51953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:47:54.581210   51953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:47:54.584186   51953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:47:54.587618   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:54.587759   51953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:47:54.618368   51953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:47:54.618493   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.683662   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.67215006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.683767   51953 docker.go:319] overlay module found
	I1210 05:47:54.686996   51953 out.go:179] * Using the docker driver based on existing profile
	I1210 05:47:54.689865   51953 start.go:309] selected driver: docker
	I1210 05:47:54.689883   51953 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.689998   51953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:47:54.690096   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.769093   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.760185758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.769542   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:54.769597   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:54.769652   51953 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.772754   51953 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:47:54.775504   51953 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:47:54.778330   51953 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:47:54.781109   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:54.781186   51953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:47:54.800171   51953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:47:54.800192   51953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:47:54.839003   51953 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:47:55.003206   51953 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 05:47:55.003455   51953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:47:55.003769   51953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:47:55.003826   51953 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.003903   51953 start.go:364] duration metric: took 49.001µs to acquireMachinesLock for "functional-644034"
	I1210 05:47:55.003933   51953 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:47:55.003940   51953 fix.go:54] fixHost starting: 
	I1210 05:47:55.004094   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.004258   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:55.028659   51953 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:47:55.028694   51953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:47:55.031932   51953 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:47:55.031977   51953 machine.go:94] provisionDockerMachine start ...
	I1210 05:47:55.032062   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.055133   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.055465   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.055479   51953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:47:55.170848   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.207999   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.208023   51953 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:47:55.208102   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.228767   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.229073   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.229085   51953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:47:55.357858   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.390746   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.390831   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.434495   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.434811   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.434828   51953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:47:55.523319   51953 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523359   51953 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523419   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:47:55.523430   51953 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.759µs
	I1210 05:47:55.523435   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:47:55.523445   51953 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 87.246µs
	I1210 05:47:55.523453   51953 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523438   51953 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:47:55.523449   51953 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523467   51953 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523481   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:47:55.523488   51953 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.262µs
	I1210 05:47:55.523494   51953 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:47:55.523503   51953 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523523   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:47:55.523531   51953 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 65.428µs
	I1210 05:47:55.523538   51953 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523542   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:47:55.523548   51953 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.473µs
	I1210 05:47:55.523554   51953 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:47:55.523548   51953 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523565   51953 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523317   51953 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523599   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:47:55.523607   51953 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.7µs
	I1210 05:47:55.523610   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:47:55.523613   51953 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:47:55.523600   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:47:55.523617   51953 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 70.203µs
	I1210 05:47:55.523622   51953 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 325.49µs
	I1210 05:47:55.523626   51953 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523628   51953 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523644   51953 cache.go:87] Successfully saved all images to host disk.
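At this point every required image already has a cached tarball on the host, so nothing is pulled. The cache layout can be inspected directly (path taken from the log; the listing in the comment is illustrative, not captured output):

	ls /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/
	# e.g.: coredns/  etcd_3.6.6-0  kube-apiserver_v1.35.0-rc.1  kube-controller-manager_v1.35.0-rc.1  kube-proxy_v1.35.0-rc.1  kube-scheduler_v1.35.0-rc.1  pause_3.10.1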
	I1210 05:47:55.587205   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:47:55.587232   51953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:47:55.587288   51953 ubuntu.go:190] setting up certificates
	I1210 05:47:55.587298   51953 provision.go:84] configureAuth start
	I1210 05:47:55.587369   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:55.604738   51953 provision.go:143] copyHostCerts
	I1210 05:47:55.604778   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604816   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:47:55.604828   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604905   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:47:55.605000   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605022   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:47:55.605029   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605061   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:47:55.605114   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605134   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:47:55.605139   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605169   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:47:55.605233   51953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:47:55.781276   51953 provision.go:177] copyRemoteCerts
	I1210 05:47:55.781365   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:47:55.781432   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.797956   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:55.902711   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 05:47:55.902771   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:47:55.919779   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 05:47:55.919840   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:47:55.936935   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 05:47:55.936994   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
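configureAuth generated a server cert whose SANs (the san=[...] list above) must cover every name or address a client may dial, and the cert was just copied to /etc/docker/server.pem on the node. A spot check with standard OpenSSL (1.1.1+ for the -ext flag; run inside the node; expected names are taken from the san list above):

	sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
	# should list: 127.0.0.1, 192.168.49.2, functional-644034, localhost, minikube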
	I1210 05:47:55.953689   51953 provision.go:87] duration metric: took 366.363656ms to configureAuth
	I1210 05:47:55.953721   51953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:47:55.953915   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:55.953927   51953 machine.go:97] duration metric: took 921.944178ms to provisionDockerMachine
	I1210 05:47:55.953936   51953 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:47:55.953952   51953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:47:55.954004   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:47:55.954054   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.971130   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.075277   51953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:47:56.078673   51953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:47:56.078694   51953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:47:56.078699   51953 command_runner.go:130] > VERSION_ID="12"
	I1210 05:47:56.078704   51953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:47:56.078708   51953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:47:56.078712   51953 command_runner.go:130] > ID=debian
	I1210 05:47:56.078717   51953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:47:56.078725   51953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:47:56.078732   51953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:47:56.078800   51953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:47:56.078828   51953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:47:56.078840   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:47:56.078899   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:47:56.078986   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:47:56.078998   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1210 05:47:56.079103   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:47:56.079112   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> /etc/test/nested/copy/4116/hosts
	I1210 05:47:56.079156   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:47:56.086554   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:56.104005   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:47:56.121596   51953 start.go:296] duration metric: took 167.644644ms for postStartSetup
	I1210 05:47:56.121686   51953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:47:56.121728   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.138924   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.243468   51953 command_runner.go:130] > 14%
	I1210 05:47:56.243960   51953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:47:56.248281   51953 command_runner.go:130] > 169G
	I1210 05:47:56.248748   51953 fix.go:56] duration metric: took 1.244804723s for fixHost
	I1210 05:47:56.248771   51953 start.go:83] releasing machines lock for "functional-644034", held for 1.24485909s
	I1210 05:47:56.248837   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:56.266070   51953 ssh_runner.go:195] Run: cat /version.json
	I1210 05:47:56.266123   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.266146   51953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:47:56.266199   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.283872   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.284272   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.472387   51953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 05:47:56.475023   51953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:47:56.475222   51953 ssh_runner.go:195] Run: systemctl --version
	I1210 05:47:56.481051   51953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:47:56.481144   51953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:47:56.481557   51953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:47:56.485740   51953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:47:56.485802   51953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:47:56.485889   51953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:47:56.493391   51953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
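ssh_runner logs the find invocation above with its shell quoting stripped; a shell-safe reconstruction of the same command (a sketch rebuilt from the logged argv, not copied from minikube source):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;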
	I1210 05:47:56.493413   51953 start.go:496] detecting cgroup driver to use...
	I1210 05:47:56.493443   51953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:47:56.493499   51953 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:47:56.508720   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:47:56.521711   51953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:47:56.521777   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:47:56.537527   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:47:56.551315   51953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:47:56.656595   51953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:47:56.765354   51953 docker.go:234] disabling docker service ...
	I1210 05:47:56.765422   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:47:56.780352   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:47:56.793570   51953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:47:56.900961   51953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:47:57.025824   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:47:57.039104   51953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:47:57.052658   51953 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
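The /etc/crictl.yaml written above points crictl at the containerd socket, so subsequent crictl calls need no --runtime-endpoint flag. Typical follow-up checks (standard crictl subcommands):

	sudo crictl info     # runtime + CNI status, as dumped later in this log
	sudo crictl images   # images visible to containerd's CRI plugin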
	I1210 05:47:57.053978   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.213891   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:47:57.223164   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:47:57.232001   51953 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:47:57.232070   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:47:57.240776   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.249302   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:47:57.258094   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.266381   51953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:47:57.274230   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:47:57.282766   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:47:57.291675   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
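The sed pipeline above rewrites /etc/containerd/config.toml in place: pause image pinned, SystemdCgroup forced off to match the detected cgroupfs driver, legacy runtime names mapped to io.containerd.runc.v2, and unprivileged ports enabled. One way to confirm the end state (a grep over the keys the seds target; the commented output is what these edits should produce, an assumption rather than captured output):

	sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml
	# expected, among other matches:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true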
	I1210 05:47:57.300542   51953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:47:57.307150   51953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:47:57.308059   51953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:47:57.315237   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:57.433904   51953 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:47:57.552794   51953 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:47:57.552901   51953 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:47:57.556769   51953 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 05:47:57.556839   51953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:47:57.556861   51953 command_runner.go:130] > Device: 0,73	Inode: 1614        Links: 1
	I1210 05:47:57.556893   51953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:57.556921   51953 command_runner.go:130] > Access: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556947   51953 command_runner.go:130] > Modify: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556968   51953 command_runner.go:130] > Change: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.557011   51953 command_runner.go:130] >  Birth: -
	I1210 05:47:57.557078   51953 start.go:564] Will wait 60s for crictl version
	I1210 05:47:57.557155   51953 ssh_runner.go:195] Run: which crictl
	I1210 05:47:57.560538   51953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:47:57.560706   51953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:47:57.582482   51953 command_runner.go:130] > Version:  0.1.0
	I1210 05:47:57.582585   51953 command_runner.go:130] > RuntimeName:  containerd
	I1210 05:47:57.582609   51953 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 05:47:57.582715   51953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:47:57.584523   51953 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:47:57.584650   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.601892   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.603507   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.622429   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.630007   51953 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:47:57.632949   51953 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:47:57.648626   51953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:47:57.652604   51953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 05:47:57.652711   51953 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:47:57.652889   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.820648   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.971830   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:58.124406   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:58.124495   51953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:47:58.146688   51953 command_runner.go:130] > {
	I1210 05:47:58.146710   51953 command_runner.go:130] >   "images":  [
	I1210 05:47:58.146724   51953 command_runner.go:130] >     {
	I1210 05:47:58.146735   51953 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 05:47:58.146741   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146747   51953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 05:47:58.146750   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146755   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146765   51953 command_runner.go:130] >       "size":  "8032639",
	I1210 05:47:58.146779   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146784   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146790   51953 command_runner.go:130] >     },
	I1210 05:47:58.146794   51953 command_runner.go:130] >     {
	I1210 05:47:58.146801   51953 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 05:47:58.146808   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146813   51953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 05:47:58.146817   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146821   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146830   51953 command_runner.go:130] >       "size":  "21166088",
	I1210 05:47:58.146837   51953 command_runner.go:130] >       "username":  "nonroot",
	I1210 05:47:58.146841   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146844   51953 command_runner.go:130] >     },
	I1210 05:47:58.146847   51953 command_runner.go:130] >     {
	I1210 05:47:58.146855   51953 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 05:47:58.146861   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146867   51953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 05:47:58.146873   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146878   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146885   51953 command_runner.go:130] >       "size":  "21748497",
	I1210 05:47:58.146888   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146897   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146904   51953 command_runner.go:130] >       },
	I1210 05:47:58.146908   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146912   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146917   51953 command_runner.go:130] >     },
	I1210 05:47:58.146925   51953 command_runner.go:130] >     {
	I1210 05:47:58.146933   51953 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 05:47:58.146939   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146948   51953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 05:47:58.146955   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146959   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146964   51953 command_runner.go:130] >       "size":  "24690149",
	I1210 05:47:58.146967   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146972   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146975   51953 command_runner.go:130] >       },
	I1210 05:47:58.146979   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146985   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146990   51953 command_runner.go:130] >     },
	I1210 05:47:58.146996   51953 command_runner.go:130] >     {
	I1210 05:47:58.147003   51953 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 05:47:58.147007   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147030   51953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 05:47:58.147034   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147038   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147042   51953 command_runner.go:130] >       "size":  "20670083",
	I1210 05:47:58.147046   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147050   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147056   51953 command_runner.go:130] >       },
	I1210 05:47:58.147060   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147067   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147070   51953 command_runner.go:130] >     },
	I1210 05:47:58.147081   51953 command_runner.go:130] >     {
	I1210 05:47:58.147088   51953 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 05:47:58.147092   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147099   51953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 05:47:58.147103   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147107   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147111   51953 command_runner.go:130] >       "size":  "22430795",
	I1210 05:47:58.147122   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147127   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147132   51953 command_runner.go:130] >     },
	I1210 05:47:58.147135   51953 command_runner.go:130] >     {
	I1210 05:47:58.147144   51953 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 05:47:58.147150   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147155   51953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 05:47:58.147161   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147173   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147180   51953 command_runner.go:130] >       "size":  "15403461",
	I1210 05:47:58.147183   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147187   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147190   51953 command_runner.go:130] >       },
	I1210 05:47:58.147194   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147198   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147205   51953 command_runner.go:130] >     },
	I1210 05:47:58.147208   51953 command_runner.go:130] >     {
	I1210 05:47:58.147215   51953 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 05:47:58.147221   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147226   51953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 05:47:58.147232   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147236   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147248   51953 command_runner.go:130] >       "size":  "265458",
	I1210 05:47:58.147252   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147256   51953 command_runner.go:130] >         "value":  "65535"
	I1210 05:47:58.147259   51953 command_runner.go:130] >       },
	I1210 05:47:58.147270   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147274   51953 command_runner.go:130] >       "pinned":  true
	I1210 05:47:58.147277   51953 command_runner.go:130] >     }
	I1210 05:47:58.147282   51953 command_runner.go:130] >   ]
	I1210 05:47:58.147284   51953 command_runner.go:130] > }
	I1210 05:47:58.149521   51953 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:47:58.149540   51953 cache_images.go:86] Images are preloaded, skipping loading
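The preload decision is made by comparing the crictl JSON above against the expected image list for v1.35.0-rc.1. The same comparison can be reproduced by hand (jq is an assumption here; it is not part of the node image):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort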
	I1210 05:47:58.149552   51953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:47:58.149645   51953 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
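The drop-in above clears ExecStart and re-points it at the version-pinned kubelet with the node's flags. Once the unit files are written and systemd reloaded a few steps below, the merged result can be verified with standard systemctl:

	systemctl cat kubelet               # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart # the effective command line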
	I1210 05:47:58.149706   51953 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:47:58.176587   51953 command_runner.go:130] > {
	I1210 05:47:58.176610   51953 command_runner.go:130] >   "cniconfig": {
	I1210 05:47:58.176616   51953 command_runner.go:130] >     "Networks": [
	I1210 05:47:58.176620   51953 command_runner.go:130] >       {
	I1210 05:47:58.176624   51953 command_runner.go:130] >         "Config": {
	I1210 05:47:58.176629   51953 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 05:47:58.176644   51953 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 05:47:58.176648   51953 command_runner.go:130] >           "Plugins": [
	I1210 05:47:58.176652   51953 command_runner.go:130] >             {
	I1210 05:47:58.176657   51953 command_runner.go:130] >               "Network": {
	I1210 05:47:58.176662   51953 command_runner.go:130] >                 "ipam": {},
	I1210 05:47:58.176673   51953 command_runner.go:130] >                 "type": "loopback"
	I1210 05:47:58.176678   51953 command_runner.go:130] >               },
	I1210 05:47:58.176687   51953 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 05:47:58.176691   51953 command_runner.go:130] >             }
	I1210 05:47:58.176694   51953 command_runner.go:130] >           ],
	I1210 05:47:58.176704   51953 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 05:47:58.176717   51953 command_runner.go:130] >         },
	I1210 05:47:58.176725   51953 command_runner.go:130] >         "IFName": "lo"
	I1210 05:47:58.176728   51953 command_runner.go:130] >       }
	I1210 05:47:58.176732   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176736   51953 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 05:47:58.176742   51953 command_runner.go:130] >     "PluginDirs": [
	I1210 05:47:58.176746   51953 command_runner.go:130] >       "/opt/cni/bin"
	I1210 05:47:58.176752   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176756   51953 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 05:47:58.176771   51953 command_runner.go:130] >     "Prefix": "eth"
	I1210 05:47:58.176775   51953 command_runner.go:130] >   },
	I1210 05:47:58.176782   51953 command_runner.go:130] >   "config": {
	I1210 05:47:58.176789   51953 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 05:47:58.176793   51953 command_runner.go:130] >       "/etc/cdi",
	I1210 05:47:58.176797   51953 command_runner.go:130] >       "/var/run/cdi"
	I1210 05:47:58.176803   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176807   51953 command_runner.go:130] >     "cni": {
	I1210 05:47:58.176813   51953 command_runner.go:130] >       "binDir": "",
	I1210 05:47:58.176817   51953 command_runner.go:130] >       "binDirs": [
	I1210 05:47:58.176821   51953 command_runner.go:130] >         "/opt/cni/bin"
	I1210 05:47:58.176825   51953 command_runner.go:130] >       ],
	I1210 05:47:58.176836   51953 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 05:47:58.176840   51953 command_runner.go:130] >       "confTemplate": "",
	I1210 05:47:58.176844   51953 command_runner.go:130] >       "ipPref": "",
	I1210 05:47:58.176850   51953 command_runner.go:130] >       "maxConfNum": 1,
	I1210 05:47:58.176854   51953 command_runner.go:130] >       "setupSerially": false,
	I1210 05:47:58.176861   51953 command_runner.go:130] >       "useInternalLoopback": false
	I1210 05:47:58.176864   51953 command_runner.go:130] >     },
	I1210 05:47:58.176874   51953 command_runner.go:130] >     "containerd": {
	I1210 05:47:58.176880   51953 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 05:47:58.176886   51953 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 05:47:58.176892   51953 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 05:47:58.176901   51953 command_runner.go:130] >       "runtimes": {
	I1210 05:47:58.176905   51953 command_runner.go:130] >         "runc": {
	I1210 05:47:58.176909   51953 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 05:47:58.176915   51953 command_runner.go:130] >           "PodAnnotations": null,
	I1210 05:47:58.176920   51953 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 05:47:58.176926   51953 command_runner.go:130] >           "cgroupWritable": false,
	I1210 05:47:58.176930   51953 command_runner.go:130] >           "cniConfDir": "",
	I1210 05:47:58.176934   51953 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 05:47:58.176939   51953 command_runner.go:130] >           "io_type": "",
	I1210 05:47:58.176943   51953 command_runner.go:130] >           "options": {
	I1210 05:47:58.176950   51953 command_runner.go:130] >             "BinaryName": "",
	I1210 05:47:58.176955   51953 command_runner.go:130] >             "CriuImagePath": "",
	I1210 05:47:58.176970   51953 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 05:47:58.176977   51953 command_runner.go:130] >             "IoGid": 0,
	I1210 05:47:58.176981   51953 command_runner.go:130] >             "IoUid": 0,
	I1210 05:47:58.176985   51953 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 05:47:58.176991   51953 command_runner.go:130] >             "Root": "",
	I1210 05:47:58.176995   51953 command_runner.go:130] >             "ShimCgroup": "",
	I1210 05:47:58.177002   51953 command_runner.go:130] >             "SystemdCgroup": false
	I1210 05:47:58.177005   51953 command_runner.go:130] >           },
	I1210 05:47:58.177011   51953 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 05:47:58.177019   51953 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 05:47:58.177023   51953 command_runner.go:130] >           "runtimePath": "",
	I1210 05:47:58.177030   51953 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 05:47:58.177035   51953 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 05:47:58.177041   51953 command_runner.go:130] >           "snapshotter": ""
	I1210 05:47:58.177044   51953 command_runner.go:130] >         }
	I1210 05:47:58.177049   51953 command_runner.go:130] >       }
	I1210 05:47:58.177052   51953 command_runner.go:130] >     },
	I1210 05:47:58.177065   51953 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 05:47:58.177073   51953 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 05:47:58.177078   51953 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 05:47:58.177083   51953 command_runner.go:130] >     "disableApparmor": false,
	I1210 05:47:58.177090   51953 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 05:47:58.177094   51953 command_runner.go:130] >     "disableProcMount": false,
	I1210 05:47:58.177098   51953 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 05:47:58.177102   51953 command_runner.go:130] >     "enableCDI": true,
	I1210 05:47:58.177106   51953 command_runner.go:130] >     "enableSelinux": false,
	I1210 05:47:58.177114   51953 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 05:47:58.177118   51953 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 05:47:58.177125   51953 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 05:47:58.177130   51953 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 05:47:58.177138   51953 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 05:47:58.177142   51953 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 05:47:58.177147   51953 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 05:47:58.177160   51953 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177170   51953 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 05:47:58.177176   51953 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177186   51953 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 05:47:58.177190   51953 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 05:47:58.177193   51953 command_runner.go:130] >   },
	I1210 05:47:58.177197   51953 command_runner.go:130] >   "features": {
	I1210 05:47:58.177201   51953 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 05:47:58.177204   51953 command_runner.go:130] >   },
	I1210 05:47:58.177209   51953 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 05:47:58.177221   51953 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177233   51953 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177237   51953 command_runner.go:130] >   "runtimeHandlers": [
	I1210 05:47:58.177246   51953 command_runner.go:130] >     {
	I1210 05:47:58.177250   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177255   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177259   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177261   51953 command_runner.go:130] >       }
	I1210 05:47:58.177264   51953 command_runner.go:130] >     },
	I1210 05:47:58.177267   51953 command_runner.go:130] >     {
	I1210 05:47:58.177271   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177275   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177279   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177282   51953 command_runner.go:130] >       },
	I1210 05:47:58.177287   51953 command_runner.go:130] >       "name": "runc"
	I1210 05:47:58.177289   51953 command_runner.go:130] >     }
	I1210 05:47:58.177293   51953 command_runner.go:130] >   ],
	I1210 05:47:58.177296   51953 command_runner.go:130] >   "status": {
	I1210 05:47:58.177300   51953 command_runner.go:130] >     "conditions": [
	I1210 05:47:58.177303   51953 command_runner.go:130] >       {
	I1210 05:47:58.177307   51953 command_runner.go:130] >         "message": "",
	I1210 05:47:58.177314   51953 command_runner.go:130] >         "reason": "",
	I1210 05:47:58.177318   51953 command_runner.go:130] >         "status": true,
	I1210 05:47:58.177329   51953 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 05:47:58.177335   51953 command_runner.go:130] >       },
	I1210 05:47:58.177339   51953 command_runner.go:130] >       {
	I1210 05:47:58.177345   51953 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 05:47:58.177356   51953 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 05:47:58.177360   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177365   51953 command_runner.go:130] >         "type": "NetworkReady"
	I1210 05:47:58.177373   51953 command_runner.go:130] >       },
	I1210 05:47:58.177376   51953 command_runner.go:130] >       {
	I1210 05:47:58.177397   51953 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 05:47:58.177406   51953 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 05:47:58.177414   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177420   51953 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 05:47:58.177425   51953 command_runner.go:130] >       }
	I1210 05:47:58.177428   51953 command_runner.go:130] >     ]
	I1210 05:47:58.177431   51953 command_runner.go:130] >   }
	I1210 05:47:58.177434   51953 command_runner.go:130] > }
	I1210 05:47:58.177746   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:58.177762   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:58.177786   51953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
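Note the NetworkReady=false condition in the crictl info dump above: no CNI config exists yet, which is why a CNI (kindnet, for docker + containerd) is being selected here. The condition can be extracted directly (jq assumed available):

	sudo crictl info | jq '.status.conditions[] | select(.type=="NetworkReady")'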
	I1210 05:47:58.177809   51953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:47:58.177931   51953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:47:58.178005   51953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:47:58.184894   51953 command_runner.go:130] > kubeadm
	I1210 05:47:58.184912   51953 command_runner.go:130] > kubectl
	I1210 05:47:58.184916   51953 command_runner.go:130] > kubelet
	I1210 05:47:58.185786   51953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:47:58.185866   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:47:58.193140   51953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:47:58.205426   51953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:47:58.217773   51953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
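The kubeadm.yaml.new staged above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one file. Recent kubeadm releases can sanity-check such a file before it is used (treat this as a sketch; the validate subcommand exists in current kubeadm, paths are taken from the log):

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new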
	I1210 05:47:58.230424   51953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:47:58.234124   51953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:47:58.234224   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:58.348721   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:58.367663   51953 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:47:58.367683   51953 certs.go:195] generating shared ca certs ...
	I1210 05:47:58.367699   51953 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:58.367828   51953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:47:58.367870   51953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:47:58.367878   51953 certs.go:257] generating profile certs ...
	I1210 05:47:58.367976   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:47:58.368034   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:47:58.368079   51953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:47:58.368088   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:47:58.368100   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:47:58.368115   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:47:58.368126   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:47:58.368137   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:47:58.368148   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:47:58.368163   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:47:58.368174   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:47:58.368220   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:47:58.368248   51953 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:47:58.368256   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:47:58.368286   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:47:58.368309   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:47:58.368331   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:47:58.368373   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:58.368402   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.368414   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.368427   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.368978   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:47:58.388893   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:47:58.409416   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:47:58.428450   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:47:58.446489   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:47:58.465644   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:47:58.483264   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:47:58.500807   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:47:58.518107   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:47:58.536070   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:47:58.553632   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:47:58.571692   51953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:47:58.584898   51953 ssh_runner.go:195] Run: openssl version
	I1210 05:47:58.590608   51953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:47:58.591139   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.599076   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:47:58.606632   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610200   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610255   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610308   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.650574   51953 command_runner.go:130] > 51391683
	I1210 05:47:58.651004   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:47:58.658249   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.665388   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:47:58.672651   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676295   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676329   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676381   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.716661   51953 command_runner.go:130] > 3ec20f2e
	I1210 05:47:58.717156   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:47:58.724496   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.731755   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:47:58.739224   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742739   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742773   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742827   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.783109   51953 command_runner.go:130] > b5213941
	I1210 05:47:58.783531   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
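	The openssl/ln/test triples above (producing 51391683, 3ec20f2e, b5213941) install each CA into OpenSSL's hashed lookup directory: `openssl x509 -hash -noout` prints the subject-name hash, and OpenSSL finds trust anchors in /etc/ssl/certs via `<hash>.0` symlinks, hence the `ln -fs` to the hash name and the `test -L` verification. A hedged Go sketch of the same sequence, shelling out to openssl the way minikube does over SSH (paths illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert computes a cert's OpenSSL subject hash and creates the
	// <hash>.0 symlink that OpenSSL's CA lookup machinery expects.
	func installCACert(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certDir, hash+".0")
		_ = os.Remove(link) // emulate ln -fs (force replace)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}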
	I1210 05:47:58.790793   51953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794232   51953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794258   51953 command_runner.go:130] >   Size: 1172      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:47:58.794265   51953 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1210 05:47:58.794272   51953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:58.794286   51953 command_runner.go:130] > Access: 2025-12-10 05:43:51.022657545 +0000
	I1210 05:47:58.794292   51953 command_runner.go:130] > Modify: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794297   51953 command_runner.go:130] > Change: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794305   51953 command_runner.go:130] >  Birth: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794558   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:47:58.837377   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.837465   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:47:58.877636   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.878121   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:47:58.918797   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.919235   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:47:58.959487   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.960010   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:47:59.003251   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:59.003763   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:47:59.044279   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:59.044747   51953 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:59.044823   51953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:47:59.044887   51953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:47:59.069970   51953 cri.go:89] found id: ""
	I1210 05:47:59.070038   51953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:47:59.076652   51953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:47:59.076673   51953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:47:59.076679   51953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:47:59.077535   51953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:47:59.077555   51953 kubeadm.go:598] restartPrimaryControlPlane start ...
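	The "found existing configuration files, will attempt cluster restart" decision above is driven by the `sudo ls` probe of kubeadm's leftover state: when artifacts like /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd survive, minikube restarts the control plane instead of running a fresh `kubeadm init`. A simplified sketch of that probe (the any-file-is-enough rule here is an assumption; the real check may be stricter):

	package main

	import (
		"fmt"
		"os"
	)

	// wantsRestart reports whether an earlier kubeadm run left its state
	// behind. Paths are the ones probed in the log above.
	func wantsRestart() bool {
		paths := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		for _, p := range paths {
			if _, err := os.Stat(p); err == nil {
				return true // a surviving artifact -> attempt a restart
			}
		}
		return false
	}

	func main() {
		fmt.Println("attempt cluster restart:", wantsRestart())
	}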
	I1210 05:47:59.077617   51953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:47:59.084671   51953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:47:59.085448   51953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-644034" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.085850   51953 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "functional-644034" cluster setting kubeconfig missing "functional-644034" context setting]
	I1210 05:47:59.086310   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
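	kubeconfig.go:62 repairs the kubeconfig under a file lock by re-adding the cluster and context entries the verify step found missing. A minimal client-go sketch of that repair, using names and the server URL from the log (error handling trimmed; the matching user entry with the client cert/key is assumed to already exist):

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/22094-2307/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		// Re-add the cluster entry the verify step reported missing.
		cfg.Clusters["functional-644034"] = &api.Cluster{
			Server:               "https://192.168.49.2:8441",
			CertificateAuthority: "/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt",
		}
		// ...and the matching context, so kubectl can select it by name.
		cfg.Contexts["functional-644034"] = &api.Context{
			Cluster:  "functional-644034",
			AuthInfo: "functional-644034",
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}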
	I1210 05:47:59.087190   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.087371   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.088034   51953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:47:59.088055   51953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:47:59.088068   51953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:47:59.088074   51953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:47:59.088078   51953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:47:59.088429   51953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:47:59.089407   51953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:47:59.096980   51953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 05:47:59.097014   51953 kubeadm.go:602] duration metric: took 19.453757ms to restartPrimaryControlPlane
	I1210 05:47:59.097024   51953 kubeadm.go:403] duration metric: took 52.281886ms to StartCluster
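	The "does not require reconfiguration" verdict just above comes from the `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` call: diff exits 0 when the rendered config matches what is already on the node, so kubeadm is not re-run. A hedged Go sketch of that exit-code interpretation:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// needsReconfig compares the kubeadm config already on the node with the
	// freshly rendered one: diff exits 0 for identical files, 1 for a
	// difference, and 2 when diff itself failed (e.g. a missing file).
	func needsReconfig(oldPath, newPath string) (bool, error) {
		err := exec.Command("diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, nil
		}
		return false, err
	}

	func main() {
		reconfig, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println("needs reconfiguration:", reconfig, "err:", err)
	}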
	I1210 05:47:59.097064   51953 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097152   51953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.097734   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097941   51953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 05:47:59.098267   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:59.098318   51953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:47:59.098380   51953 addons.go:70] Setting storage-provisioner=true in profile "functional-644034"
	I1210 05:47:59.098393   51953 addons.go:239] Setting addon storage-provisioner=true in "functional-644034"
	I1210 05:47:59.098419   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.098907   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.101905   51953 out.go:179] * Verifying Kubernetes components...
	I1210 05:47:59.106662   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:59.109785   51953 addons.go:70] Setting default-storageclass=true in profile "functional-644034"
	I1210 05:47:59.109823   51953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-644034"
	I1210 05:47:59.110155   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.137186   51953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:47:59.140065   51953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.140094   51953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:47:59.140172   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.152137   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.152308   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.152605   51953 addons.go:239] Setting addon default-storageclass=true in "functional-644034"
	I1210 05:47:59.152636   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.153047   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.173160   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.202277   51953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:47:59.202307   51953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:47:59.202368   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.232670   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.321380   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:59.337472   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.374986   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.169551   51953 node_ready.go:35] waiting up to 6m0s for node "functional-644034" to be "Ready" ...
	I1210 05:48:00.169689   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.169752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.170008   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170051   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170077   51953 retry.go:31] will retry after 139.03743ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
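	Each failed addon apply above is handed to retry.go, which re-runs the command after a growing, jittered delay (139.03743ms, 348.331986ms, 233.204425ms, ...) until the apiserver comes back. A generic Go sketch of that retry-with-backoff shape (the constants and jitter formula are illustrative, not minikube's exact helper):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or attempts run out,
	// sleeping a little longer (with jitter) after each failure.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base<<i + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("connection refused") // stand-in for the kubectl apply failure
			}
			return nil
		})
		fmt.Println("final:", err, "after", calls, "calls")
	}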
	I1210 05:48:00.170121   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170135   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170145   51953 retry.go:31] will retry after 348.331986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.310507   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.415931   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.416069   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.416135   51953 retry.go:31] will retry after 233.204425ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.519312   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.585157   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.585240   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.585274   51953 retry.go:31] will retry after 499.606359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.650447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.669993   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.712181   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.715417   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.715449   51953 retry.go:31] will retry after 781.025556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.086035   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.148055   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.148095   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.148115   51953 retry.go:31] will retry after 644.355236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.170281   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.170372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.170734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.497246   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:01.552133   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.555247   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.555278   51953 retry.go:31] will retry after 1.200680207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.670555   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.670646   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.670959   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.793341   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.851452   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.854727   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.854768   51953 retry.go:31] will retry after 727.381606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.170188   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.170290   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.170618   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:02.170696   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
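	node_ready.go is meanwhile polling GET /api/v1/nodes/functional-644034 on a 500ms cadence, tolerating the connection-refused errors above until the Ready condition turns True or the 6m budget expires. A client-go sketch of the same wait, a hedged approximation using names from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22094-2307/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll the node until its Ready condition is True, swallowing
		// transient errors while the apiserver restarts, as the log does.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-644034", metav1.GetOptions{})
				if err != nil {
					return false, nil // retry on connection refused et al.
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready wait:", err)
	}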
	I1210 05:48:02.583237   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:02.649935   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.649981   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.650022   51953 retry.go:31] will retry after 1.310515996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.670155   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.670292   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.670651   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:02.757075   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:02.818837   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.821796   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.821831   51953 retry.go:31] will retry after 1.687874073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:03.170317   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.170406   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.170707   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.670505   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.670583   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.670925   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.961404   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:04.024244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.024282   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.024323   51953 retry.go:31] will retry after 1.628415395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.170524   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.170651   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:04.171129   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:04.510724   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:04.566617   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.570030   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.570064   51953 retry.go:31] will retry after 2.695563296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.670310   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.670389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.670711   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.170563   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.170635   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.170967   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.653658   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:05.670351   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.670461   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.670799   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.744168   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:05.744207   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:05.744248   51953 retry.go:31] will retry after 1.470532715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:06.169848   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.169975   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.170317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:06.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.670264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:06.670329   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:07.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.170058   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:07.215626   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:07.266052   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:07.280336   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.280370   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.280387   51953 retry.go:31] will retry after 5.58106306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333195   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.333236   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333256   51953 retry.go:31] will retry after 2.610344026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.670753   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.670832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.671195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.169912   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.170281   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.669773   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.170205   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.170536   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:09.170594   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
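
Between apply retries, minikube polls the node object roughly every 500ms, as the alternating `round_trippers` Request/Response pairs show; each GET fails immediately because nothing is listening on 192.168.49.2:8441. A minimal stand-in for that wait loop (hypothetical code, not minikube's node_ready.go) probes only TCP reachability, which is exactly the layer failing in the "dial tcp ... connection refused" messages; the real check additionally inspects the Node object's Ready condition:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer dials addr every interval until a TCP connection
    // succeeds or ctx expires, mirroring the refused-connection loop above.
    // A successful dial only proves the port is open, not that the node is
    // Ready.
    func waitForAPIServer(ctx context.Context, addr string, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		conn, err := net.DialTimeout("tcp", addr, interval)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Printf("error dialing %s (will retry): %v\n", addr, err)
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	// 192.168.49.2:8441 is the apiserver endpoint from the log; substitute
    	// any host:port when experimenting locally.
    	if err := waitForAPIServer(ctx, "192.168.49.2:8441", 500*time.Millisecond); err != nil {
    		fmt.Println("gave up:", err)
    	}
    }
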
	I1210 05:48:09.670237   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.670311   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.670667   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.944159   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:10.010561   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:10.010619   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.010642   51953 retry.go:31] will retry after 2.5620788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.169787   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.169854   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.170167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:10.669895   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.669974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.169913   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.170234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.669833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.670159   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:11.670233   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:12.169956   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.170030   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.170375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.572886   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:12.631295   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.634400   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.634432   51953 retry.go:31] will retry after 5.90622422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.670736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.670808   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.671172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.862533   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:12.918893   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.918929   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.918949   51953 retry.go:31] will retry after 8.272023324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
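
Every failed apply in this section has the same root cause: `kubectl apply` validates manifests against the apiserver's `/openapi/v2` schema, so while localhost:8441 refuses connections, validation fails before the manifest is even submitted (hence kubectl's hint about `--validate=false`). The sketch below is a hypothetical pre-flight check, not minikube code, showing one way a wrapper could distinguish "apiserver down" from "manifest invalid" before shelling out; the endpoint URL matches the log, but the helper and its policy are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os/exec"
    	"time"
    )

    // openAPIReachable reports whether the apiserver answers HTTP on
    // /openapi/v2. Any HTTP response (even 401/403 for an anonymous client)
    // means the port is serving; a transport error means validation-backed
    // applies cannot work yet.
    func openAPIReachable(base string) bool {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver's cert is self-signed from this client's view.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(base + "/openapi/v2")
    	if err != nil {
    		return false
    	}
    	resp.Body.Close()
    	return true
    }

    func main() {
    	if !openAPIReachable("https://localhost:8441") {
    		fmt.Println("apiserver not serving yet; retry later instead of blaming the manifest")
    		return
    	}
    	// Assumes kubectl is on PATH and a kubeconfig is already configured.
    	out, err := exec.Command("kubectl", "apply", "-f",
    		"/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
    	fmt.Printf("kubectl: %s err=%v\n", out, err)
    }
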
	I1210 05:48:13.170464   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.170532   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.170809   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:13.670589   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.670665   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.670979   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:13.671051   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:14.170623   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.170704   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.171052   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:14.669975   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.670351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.170046   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.170119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.170417   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.670099   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.670181   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:16.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:16.170210   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:16.669859   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.669945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.169889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.669877   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.669969   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.670225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:18.169971   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.170045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.170383   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:18.170445   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:18.540818   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:18.598871   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:18.601811   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.601841   51953 retry.go:31] will retry after 12.747843498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.670272   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.670582   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.170370   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.170452   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.170779   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.670779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.671114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.169841   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.169920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.170286   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.669841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.670151   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:20.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:21.169914   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.169987   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.170308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:21.191680   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:21.254244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:21.254291   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.254309   51953 retry.go:31] will retry after 13.504528238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.669784   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.669884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.670234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.169979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.170274   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.670052   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.670132   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.670457   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:22.670511   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:23.170156   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.170275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.170563   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:23.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.169911   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.170218   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.670237   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.670543   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:24.670597   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:25.170342   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.170412   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.170680   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:25.670543   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.670623   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.670919   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.170671   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.170749   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.669682   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.669752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.670007   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:27.170402   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.170479   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.170798   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:27.170859   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:27.670357   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.670437   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.670773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.170551   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.170643   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.170896   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.670265   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.670338   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.670758   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:29.170472   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.170542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.170877   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:29.170933   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:29.669736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.669810   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.670135   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.169940   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.170305   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.669879   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.669957   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.670279   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.169834   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.350447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:31.407735   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:31.410898   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.410931   51953 retry.go:31] will retry after 18.518112559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.670455   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.670542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.670952   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:31.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:32.170764   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.170837   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.171167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:32.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.669900   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.670158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.169936   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.669974   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.670051   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.670366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.170663   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.170730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.171001   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:34.171083   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:34.670065   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.670459   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.759888   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:34.813991   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:34.817148   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:34.817180   51953 retry.go:31] will retry after 7.858877757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:35.170714   51953 type.go:168] "Request Body" body=""
	I1210 05:48:35.170783   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:35.171144   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:35.669794   51953 type.go:168] "Request Body" body=""
	I1210 05:48:35.669884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:35.670145   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:36.169862   51953 type.go:168] "Request Body" body=""
	I1210 05:48:36.169932   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:36.170264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:36.669949   51953 type.go:168] "Request Body" body=""
	I1210 05:48:36.670019   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:36.670336   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:36.670392   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:37.170023   51953 type.go:168] "Request Body" body=""
	I1210 05:48:37.170089   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:37.170351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:37.670112   51953 type.go:168] "Request Body" body=""
	I1210 05:48:37.670187   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:37.670504   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:38.170212   51953 type.go:168] "Request Body" body=""
	I1210 05:48:38.170304   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:38.170601   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:38.670326   51953 type.go:168] "Request Body" body=""
	I1210 05:48:38.670390   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:38.670677   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:38.670718   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:39.170413   51953 type.go:168] "Request Body" body=""
	I1210 05:48:39.170487   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:39.170808   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:39.669722   51953 type.go:168] "Request Body" body=""
	I1210 05:48:39.669794   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:39.670121   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:40.169742   51953 type.go:168] "Request Body" body=""
	I1210 05:48:40.169816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:40.170090   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:40.669786   51953 type.go:168] "Request Body" body=""
	I1210 05:48:40.669863   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:40.670230   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:41.169931   51953 type.go:168] "Request Body" body=""
	I1210 05:48:41.170003   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:41.170334   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:41.170388   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:41.670036   51953 type.go:168] "Request Body" body=""
	I1210 05:48:41.670109   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:41.670415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:42.170213   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:42.170632   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.670451   51953 type.go:168] "Request Body" body=""
	I1210 05:48:42.670533   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:42.670872   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.677131   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:42.736218   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:42.736261   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:42.736279   51953 retry.go:31] will retry after 23.425189001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:43.170386   51953 type.go:168] "Request Body" body=""
	I1210 05:48:43.170464   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:43.170737   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:43.170779   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:43.670538   51953 type.go:168] "Request Body" body=""
	I1210 05:48:43.670609   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:43.670906   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:44.170640   51953 type.go:168] "Request Body" body=""
	I1210 05:48:44.170719   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:44.171057   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:44.669915   51953 type.go:168] "Request Body" body=""
	I1210 05:48:44.669983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:44.670265   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:45.170036   51953 type.go:168] "Request Body" body=""
	I1210 05:48:45.170123   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:45.175201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1210 05:48:45.175287   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:45.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:48:45.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:45.670195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:46.170498   51953 type.go:168] "Request Body" body=""
	I1210 05:48:46.170576   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:46.170876   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:46.670607   51953 type.go:168] "Request Body" body=""
	I1210 05:48:46.670701   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:46.671031   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:47.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:48:47.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:47.170154   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:47.669733   51953 type.go:168] "Request Body" body=""
	I1210 05:48:47.669806   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:47.670071   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:47.670117   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:48.169793   51953 type.go:168] "Request Body" body=""
	I1210 05:48:48.169879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:48.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:48.669835   51953 type.go:168] "Request Body" body=""
	I1210 05:48:48.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:48.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:49.170055   51953 type.go:168] "Request Body" body=""
	I1210 05:48:49.170124   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:49.170378   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:49.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:49.670235   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:49.670525   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:49.670571   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:49.930022   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:49.989791   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:49.993079   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:49.993114   51953 retry.go:31] will retry after 23.38662002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
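The `retry.go:31] will retry after 23.38662002s` line above is a backoff helper re-scheduling the failed kubectl apply with a randomized delay. A minimal sketch of that pattern in Go (the helper name, bounds, and jitter model are illustrative assumptions, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or the attempt budget is
    // spent, sleeping a randomized, growing interval between attempts.
    func retryWithBackoff(fn func() error, base time.Duration, maxAttempts int) error {
    	var err error
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Grow the base each attempt and add random jitter; randomization
    		// is why the logged delays (23.4s, 38.8s, 17.1s) are uneven.
    		backoff := base << attempt
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

In this log, three such attempts fail (05:48:49, 05:49:13, 05:49:30) before the addon enabler gives up and surfaces the error as the `out.go:285` warning further down.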
	I1210 05:48:50.170615   51953 type.go:168] "Request Body" body=""
	I1210 05:48:50.170692   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:50.171002   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:50.669688   51953 type.go:168] "Request Body" body=""
	I1210 05:48:50.669757   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:50.670060   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:51.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:51.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:51.170229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:51.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:48:51.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:51.670261   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:52.169845   51953 type.go:168] "Request Body" body=""
	I1210 05:48:52.169924   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:52.170187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:52.170237   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:52.669804   51953 type.go:168] "Request Body" body=""
	I1210 05:48:52.669873   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:52.670187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:53.169870   51953 type.go:168] "Request Body" body=""
	I1210 05:48:53.169941   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:53.170273   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:53.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:53.669863   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:53.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:54.169803   51953 type.go:168] "Request Body" body=""
	I1210 05:48:54.169877   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:54.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:54.170270   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:54.670065   51953 type.go:168] "Request Body" body=""
	I1210 05:48:54.670136   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:54.670470   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:55.169793   51953 type.go:168] "Request Body" body=""
	I1210 05:48:55.169876   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:55.170142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:55.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:55.669919   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:55.670247   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:56.169832   51953 type.go:168] "Request Body" body=""
	I1210 05:48:56.169907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:56.170229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:56.669896   51953 type.go:168] "Request Body" body=""
	I1210 05:48:56.669967   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:56.670287   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:56.670338   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
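Each `Request`/`Response` pair above is one iteration of the node-readiness poll: a GET on /api/v1/nodes/functional-644034 roughly every 500 ms, failing with `connection refused` because nothing is listening on 192.168.49.2:8441. A hedged client-go sketch of an equivalent check (the kubeconfig path and node name are taken from the log; the loop itself is illustrative, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		ready, err := nodeReady(context.TODO(), cs, "functional-644034")
    		if err != nil {
    			fmt.Println("will retry:", err)
    		} else if ready {
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
    	}
    }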
	I1210 05:48:57.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:57.169898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:57.170223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:57.669803   51953 type.go:168] "Request Body" body=""
	I1210 05:48:57.669879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:57.670238   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:58.169908   51953 type.go:168] "Request Body" body=""
	I1210 05:48:58.169985   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:58.170322   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:58.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:48:58.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:58.670445   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:58.670497   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:59.170301   51953 type.go:168] "Request Body" body=""
	I1210 05:48:59.170378   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:59.170749   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:59.670557   51953 type.go:168] "Request Body" body=""
	I1210 05:48:59.670633   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:59.670949   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:00.169725   51953 type.go:168] "Request Body" body=""
	I1210 05:49:00.169813   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:00.170141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:00.670083   51953 type.go:168] "Request Body" body=""
	I1210 05:49:00.670159   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:00.670486   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:00.670533   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:01.169951   51953 type.go:168] "Request Body" body=""
	I1210 05:49:01.170038   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:01.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:01.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:01.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:01.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:02.169846   51953 type.go:168] "Request Body" body=""
	I1210 05:49:02.169918   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:02.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:02.669747   51953 type.go:168] "Request Body" body=""
	I1210 05:49:02.669829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:02.670143   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:03.169862   51953 type.go:168] "Request Body" body=""
	I1210 05:49:03.169937   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:03.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:03.170307   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:03.669983   51953 type.go:168] "Request Body" body=""
	I1210 05:49:03.670055   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:03.670401   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:04.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:49:04.170070   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:04.170429   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:04.670184   51953 type.go:168] "Request Body" body=""
	I1210 05:49:04.670254   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:04.670541   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:05.169853   51953 type.go:168] "Request Body" body=""
	I1210 05:49:05.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:05.170261   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:05.669805   51953 type.go:168] "Request Body" body=""
	I1210 05:49:05.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:05.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:05.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:06.161707   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:06.170118   51953 type.go:168] "Request Body" body=""
	I1210 05:49:06.170187   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:06.170454   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:06.215983   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:06.219418   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:06.219449   51953 retry.go:31] will retry after 38.750779649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
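Note that the same failure shows up at two addresses: the node polls hit 192.168.49.2:8441 while kubectl's OpenAPI schema download hits [::1]:8441, and both get `connection refused`, meaning the apiserver is not listening at all rather than merely slow. A quick TCP probe distinguishes "refused" from "timed out" (addresses copied from the log; the probe is an illustrative diagnostic, not part of the test):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	for _, addr := range []string{"192.168.49.2:8441", "[::1]:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			// "connection refused" => port closed; a timeout would instead
    			// suggest a firewall drop or an overloaded apiserver.
    			fmt.Printf("%s: %v\n", addr, err)
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s: open\n", addr)
    	}
    }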
	I1210 05:49:06.669785   51953 type.go:168] "Request Body" body=""
	I1210 05:49:06.669865   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:06.670186   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:07.169875   51953 type.go:168] "Request Body" body=""
	I1210 05:49:07.169941   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:07.170192   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:07.669935   51953 type.go:168] "Request Body" body=""
	I1210 05:49:07.670005   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:07.670350   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:07.670403   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:08.170063   51953 type.go:168] "Request Body" body=""
	I1210 05:49:08.170142   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:08.170510   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:08.670188   51953 type.go:168] "Request Body" body=""
	I1210 05:49:08.670268   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:08.670583   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:09.170358   51953 type.go:168] "Request Body" body=""
	I1210 05:49:09.170435   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:09.170718   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:09.670114   51953 type.go:168] "Request Body" body=""
	I1210 05:49:09.670186   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:09.670501   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:09.670595   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:10.170242   51953 type.go:168] "Request Body" body=""
	I1210 05:49:10.170308   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:10.170650   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:10.670454   51953 type.go:168] "Request Body" body=""
	I1210 05:49:10.670533   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:10.670873   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:11.170681   51953 type.go:168] "Request Body" body=""
	I1210 05:49:11.170756   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:11.171117   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:11.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:49:11.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:11.670185   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:12.169855   51953 type.go:168] "Request Body" body=""
	I1210 05:49:12.169925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:12.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:12.170304   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:12.669856   51953 type.go:168] "Request Body" body=""
	I1210 05:49:12.669928   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:12.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:13.169860   51953 type.go:168] "Request Body" body=""
	I1210 05:49:13.169943   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:13.170217   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:13.380712   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:13.443508   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:13.443549   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:13.443568   51953 retry.go:31] will retry after 17.108062036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:13.669825   51953 type.go:168] "Request Body" body=""
	I1210 05:49:13.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:13.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:14.169973   51953 type.go:168] "Request Body" body=""
	I1210 05:49:14.170046   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:14.170360   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:14.170413   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:14.670243   51953 type.go:168] "Request Body" body=""
	I1210 05:49:14.670320   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:14.670588   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:15.170418   51953 type.go:168] "Request Body" body=""
	I1210 05:49:15.170487   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:15.170795   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:15.670586   51953 type.go:168] "Request Body" body=""
	I1210 05:49:15.670658   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:15.670975   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:16.170704   51953 type.go:168] "Request Body" body=""
	I1210 05:49:16.170776   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:16.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:16.171120   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:16.669813   51953 type.go:168] "Request Body" body=""
	I1210 05:49:16.669905   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:16.670255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:17.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:49:17.169899   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:17.170218   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:17.669769   51953 type.go:168] "Request Body" body=""
	I1210 05:49:17.669844   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:17.670143   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:18.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:49:18.169934   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:18.170269   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:18.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:49:18.670094   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:18.670415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:18.670472   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:19.170323   51953 type.go:168] "Request Body" body=""
	I1210 05:49:19.170395   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:19.170661   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:19.670601   51953 type.go:168] "Request Body" body=""
	I1210 05:49:19.670672   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:19.670993   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:20.169740   51953 type.go:168] "Request Body" body=""
	I1210 05:49:20.169817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:20.170150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:20.670516   51953 type.go:168] "Request Body" body=""
	I1210 05:49:20.670584   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:20.670897   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:20.670954   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:21.170713   51953 type.go:168] "Request Body" body=""
	I1210 05:49:21.170783   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:21.171082   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:21.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:49:21.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:21.670172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:22.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:49:22.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:22.170106   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:22.669822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:22.669894   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:22.670229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:23.169802   51953 type.go:168] "Request Body" body=""
	I1210 05:49:23.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:23.170200   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:23.170257   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:23.669758   51953 type.go:168] "Request Body" body=""
	I1210 05:49:23.669829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:23.670132   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:24.169822   51953 type.go:168] "Request Body" body=""
	I1210 05:49:24.169902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:24.170262   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:24.670129   51953 type.go:168] "Request Body" body=""
	I1210 05:49:24.670207   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:24.670559   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:25.170449   51953 type.go:168] "Request Body" body=""
	I1210 05:49:25.170521   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:25.170831   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:25.170881   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:25.670585   51953 type.go:168] "Request Body" body=""
	I1210 05:49:25.670666   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:25.671038   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:26.170684   51953 type.go:168] "Request Body" body=""
	I1210 05:49:26.170760   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:26.171104   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:26.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:49:26.669864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:26.670150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.169852   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.169930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.170272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:27.669984   51953 type.go:168] "Request Body" body=""
	I1210 05:49:27.670061   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:27.670384   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:27.670440   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:28.169751   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.170155   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:28.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:49:28.669874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:28.670210   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.170062   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.170136   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.170491   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:29.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:49:29.670274   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:29.670550   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:29.670593   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:30.170374   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.170446   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.170838   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:30.552353   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:30.608474   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608517   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608604   51953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
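This warning is the end state: the storage-provisioner callback has exhausted its retries, so the addon enabler reports the last error instead of rescheduling. The failing callback is essentially a remote kubectl apply; a simplified sketch of what the logged ssh_runner invocation amounts to (paths copied from the log, the exec wrapper is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon mirrors the logged command:
    //   sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply --force -f <manifest>
    func applyAddon(manifest string) error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"apply", "--force", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
    	}
    	return nil
    }

kubectl's own hint (`--validate=false`) would only skip the OpenAPI schema download; it would not help here, since the subsequent request to the apiserver would be refused just the same.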
	I1210 05:49:30.670690   51953 type.go:168] "Request Body" body=""
	I1210 05:49:30.670767   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:30.671090   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.169783   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.169860   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.170226   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:31.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:31.669889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:31.670241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:32.169940   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.170013   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.170338   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:32.170396   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:32.670045   51953 type.go:168] "Request Body" body=""
	I1210 05:49:32.670119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:32.670396   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.169834   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.170309   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:33.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:33.670201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.169903   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.169973   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.170269   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:34.670193   51953 type.go:168] "Request Body" body=""
	I1210 05:49:34.670266   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:34.670601   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:34.670655   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:35.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.170214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:35.669756   51953 type.go:168] "Request Body" body=""
	I1210 05:49:35.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:35.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.169901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.170223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:36.669946   51953 type.go:168] "Request Body" body=""
	I1210 05:49:36.670020   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:36.670367   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-644034 repeated every ~500ms, every response empty; node_ready.go:55 warned four times (05:49:37, 05:49:39, 05:49:41, 05:49:43): error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1210 05:49:44.670057   51953 type.go:168] "Request Body" body=""
	I1210 05:49:44.670140   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:44.670470   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
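The trace above is minikube's node-readiness poll: a GET on the node object every ~500ms, with node_ready.go surfacing a "will retry" warning after each string of refused connections while the apiserver is down. A minimal client-go sketch of that pattern (a hypothetical helper, not minikube's actual node_ready.go; node name and kubeconfig path taken from this log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver for the named node until its Ready
// condition is True or ctx expires. Errors such as "connection refused"
// (an apiserver that is still restarting) are logged and retried, which
// is the loop producing the round_trippers/node_ready lines above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond) // poll interval seen in the trace
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	// Kubeconfig path matches the one used by the apply command below.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-644034"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}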
	I1210 05:49:44.970959   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:45.060109   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064226   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064337   51953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:49:45.067552   51953 out.go:179] * Enabled addons: 
	I1210 05:49:45.070225   51953 addons.go:530] duration metric: took 1m45.971891823s for enable addons: enabled=[]
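The "apply failed, will retry" / out.go warning above is the addon applier giving up while the apiserver stays unreachable: kubectl's client-side validation needs the /openapi/v2 endpoint, so a down apiserver fails even a well-formed manifest (hence the --validate=false hint in the error text). A rough sketch of that apply-with-retry pattern, run locally rather than via sudo over SSH as minikube does, with an illustrative retry budget (not minikube's actual addons.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry mirrors the retry pattern visible above: run
// `kubectl apply --force -f <manifest>` and retry on failure, since the
// apiserver may still be coming up ("connection refused" during validation).
func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
		time.Sleep(2 * time.Second) // illustrative backoff
	}
	return lastErr
}

func main() {
	// Paths taken from the failing command in the log above.
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		5,
	)
	if err != nil {
		fmt.Println(err)
	}
}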
	I1210 05:49:45.169999   51953 type.go:168] "Request Body" body=""
	I1210 05:49:45.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:45.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-644034 repeated every ~500ms, every response empty; node_ready.go:55 logged the connection-refused warning 23 more times, roughly every two seconds, from 05:49:46.170369 through 05:50:35.670919: error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1210 05:50:36.669793   51953 type.go:168] "Request Body" body=""
	I1210 05:50:36.669873   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:36.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:37.169942   51953 type.go:168] "Request Body" body=""
	I1210 05:50:37.170016   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:37.170292   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:37.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:50:37.669902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:37.670220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:38.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:50:38.169910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:38.170283   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:38.170338   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:38.669845   51953 type.go:168] "Request Body" body=""
	I1210 05:50:38.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:38.670182   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:39.170144   51953 type.go:168] "Request Body" body=""
	I1210 05:50:39.170220   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:39.170549   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:39.670142   51953 type.go:168] "Request Body" body=""
	I1210 05:50:39.670218   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:39.670527   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:40.170193   51953 type.go:168] "Request Body" body=""
	I1210 05:50:40.170274   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:40.170539   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:40.170603   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:40.670363   51953 type.go:168] "Request Body" body=""
	I1210 05:50:40.670438   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:40.670794   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:41.170587   51953 type.go:168] "Request Body" body=""
	I1210 05:50:41.170671   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:41.171005   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:41.669712   51953 type.go:168] "Request Body" body=""
	I1210 05:50:41.669800   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:41.670128   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:42.169860   51953 type.go:168] "Request Body" body=""
	I1210 05:50:42.169951   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:42.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:42.669840   51953 type.go:168] "Request Body" body=""
	I1210 05:50:42.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:42.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:42.670293   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:43.169767   51953 type.go:168] "Request Body" body=""
	I1210 05:50:43.169833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:43.170101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:43.669850   51953 type.go:168] "Request Body" body=""
	I1210 05:50:43.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:43.670246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:44.169977   51953 type.go:168] "Request Body" body=""
	I1210 05:50:44.170071   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:44.170414   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:44.670140   51953 type.go:168] "Request Body" body=""
	I1210 05:50:44.670226   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:44.670613   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:44.670677   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:45.170475   51953 type.go:168] "Request Body" body=""
	I1210 05:50:45.170563   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:45.170891   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:45.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:50:45.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:45.670222   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:46.169767   51953 type.go:168] "Request Body" body=""
	I1210 05:50:46.169838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:46.170104   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:46.669827   51953 type.go:168] "Request Body" body=""
	I1210 05:50:46.669903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:46.670226   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:47.169875   51953 type.go:168] "Request Body" body=""
	I1210 05:50:47.169958   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:47.170385   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:47.170442   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:47.670688   51953 type.go:168] "Request Body" body=""
	I1210 05:50:47.670757   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:47.671081   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:48.169796   51953 type.go:168] "Request Body" body=""
	I1210 05:50:48.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:48.170206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:48.669926   51953 type.go:168] "Request Body" body=""
	I1210 05:50:48.670000   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:48.670320   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:49.170308   51953 type.go:168] "Request Body" body=""
	I1210 05:50:49.170376   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:49.170645   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:49.170686   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:49.670650   51953 type.go:168] "Request Body" body=""
	I1210 05:50:49.670726   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:49.671070   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:50.170763   51953 type.go:168] "Request Body" body=""
	I1210 05:50:50.170849   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:50.171249   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:50.669788   51953 type.go:168] "Request Body" body=""
	I1210 05:50:50.669858   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:50.670189   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:51.169892   51953 type.go:168] "Request Body" body=""
	I1210 05:50:51.169972   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:51.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:51.670028   51953 type.go:168] "Request Body" body=""
	I1210 05:50:51.670106   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:51.670436   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:51.670489   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:52.170129   51953 type.go:168] "Request Body" body=""
	I1210 05:50:52.170197   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:52.170458   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:52.669816   51953 type.go:168] "Request Body" body=""
	I1210 05:50:52.669892   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:52.670248   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:53.169982   51953 type.go:168] "Request Body" body=""
	I1210 05:50:53.170060   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:53.170464   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:53.669749   51953 type.go:168] "Request Body" body=""
	I1210 05:50:53.669818   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:53.670085   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:54.169781   51953 type.go:168] "Request Body" body=""
	I1210 05:50:54.169858   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:54.170182   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:54.170236   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:54.669996   51953 type.go:168] "Request Body" body=""
	I1210 05:50:54.670086   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:54.670418   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:55.170104   51953 type.go:168] "Request Body" body=""
	I1210 05:50:55.170177   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:55.170449   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:55.669843   51953 type.go:168] "Request Body" body=""
	I1210 05:50:55.669923   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:55.670268   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:56.169975   51953 type.go:168] "Request Body" body=""
	I1210 05:50:56.170051   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:56.170388   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:56.170441   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:56.670095   51953 type.go:168] "Request Body" body=""
	I1210 05:50:56.670168   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:56.670482   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:57.169890   51953 type.go:168] "Request Body" body=""
	I1210 05:50:57.169984   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:57.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:57.669805   51953 type.go:168] "Request Body" body=""
	I1210 05:50:57.669879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:57.670189   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:58.169774   51953 type.go:168] "Request Body" body=""
	I1210 05:50:58.169842   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:58.170184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:58.669884   51953 type.go:168] "Request Body" body=""
	I1210 05:50:58.669959   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:58.670302   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:58.670355   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:59.170179   51953 type.go:168] "Request Body" body=""
	I1210 05:50:59.170260   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:59.170621   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:59.670326   51953 type.go:168] "Request Body" body=""
	I1210 05:50:59.670400   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:59.670713   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:00.170597   51953 type.go:168] "Request Body" body=""
	I1210 05:51:00.170676   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:00.171062   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:00.670393   51953 type.go:168] "Request Body" body=""
	I1210 05:51:00.670470   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:00.670779   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:00.670859   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:01.170104   51953 type.go:168] "Request Body" body=""
	I1210 05:51:01.170176   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:01.170534   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:01.670409   51953 type.go:168] "Request Body" body=""
	I1210 05:51:01.670489   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:01.670793   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:02.170600   51953 type.go:168] "Request Body" body=""
	I1210 05:51:02.170681   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:02.171034   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:02.669707   51953 type.go:168] "Request Body" body=""
	I1210 05:51:02.669777   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:02.670050   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:03.169821   51953 type.go:168] "Request Body" body=""
	I1210 05:51:03.169928   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:03.170262   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:03.170336   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:03.669876   51953 type.go:168] "Request Body" body=""
	I1210 05:51:03.669952   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:03.670257   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:04.169761   51953 type.go:168] "Request Body" body=""
	I1210 05:51:04.169839   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:04.170116   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:04.670079   51953 type.go:168] "Request Body" body=""
	I1210 05:51:04.670153   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:04.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:05.170179   51953 type.go:168] "Request Body" body=""
	I1210 05:51:05.170252   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:05.170541   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:05.170585   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:05.670315   51953 type.go:168] "Request Body" body=""
	I1210 05:51:05.670404   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:05.670663   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:06.170465   51953 type.go:168] "Request Body" body=""
	I1210 05:51:06.170568   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:06.170894   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:06.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:51:06.670801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:06.671114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:07.169747   51953 type.go:168] "Request Body" body=""
	I1210 05:51:07.169823   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:07.170100   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:07.669844   51953 type.go:168] "Request Body" body=""
	I1210 05:51:07.669918   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:07.670193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:07.670235   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:08.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:51:08.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:08.170261   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:08.669794   51953 type.go:168] "Request Body" body=""
	I1210 05:51:08.669861   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:08.670162   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:09.170071   51953 type.go:168] "Request Body" body=""
	I1210 05:51:09.170149   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:09.170495   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:09.670204   51953 type.go:168] "Request Body" body=""
	I1210 05:51:09.670276   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:09.670598   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:09.670653   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:10.169774   51953 type.go:168] "Request Body" body=""
	I1210 05:51:10.169872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:10.170214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:10.669859   51953 type.go:168] "Request Body" body=""
	I1210 05:51:10.669948   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:10.670307   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:11.169790   51953 type.go:168] "Request Body" body=""
	I1210 05:51:11.169861   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:11.170177   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:11.669711   51953 type.go:168] "Request Body" body=""
	I1210 05:51:11.669789   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:11.670102   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:12.169688   51953 type.go:168] "Request Body" body=""
	I1210 05:51:12.169769   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:12.170083   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:12.170143   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:12.669821   51953 type.go:168] "Request Body" body=""
	I1210 05:51:12.669925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:12.670244   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:13.170587   51953 type.go:168] "Request Body" body=""
	I1210 05:51:13.170667   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:13.170931   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:13.670750   51953 type.go:168] "Request Body" body=""
	I1210 05:51:13.670830   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:13.671209   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:14.169828   51953 type.go:168] "Request Body" body=""
	I1210 05:51:14.169903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:14.170243   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:14.170295   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:14.670005   51953 type.go:168] "Request Body" body=""
	I1210 05:51:14.670076   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:14.670345   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:15.170019   51953 type.go:168] "Request Body" body=""
	I1210 05:51:15.170092   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:15.170405   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:15.670090   51953 type.go:168] "Request Body" body=""
	I1210 05:51:15.670164   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:15.670488   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:16.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:51:16.169830   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:16.170154   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:16.669884   51953 type.go:168] "Request Body" body=""
	I1210 05:51:16.669954   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:16.670321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:16.670378   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:17.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:51:17.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:17.170203   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:17.669857   51953 type.go:168] "Request Body" body=""
	I1210 05:51:17.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:17.670182   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:18.169839   51953 type.go:168] "Request Body" body=""
	I1210 05:51:18.169915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:18.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:18.669814   51953 type.go:168] "Request Body" body=""
	I1210 05:51:18.669894   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:18.670212   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:19.170018   51953 type.go:168] "Request Body" body=""
	I1210 05:51:19.170103   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:19.170385   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:19.170435   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:19.670213   51953 type.go:168] "Request Body" body=""
	I1210 05:51:19.670290   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:19.670634   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:20.170418   51953 type.go:168] "Request Body" body=""
	I1210 05:51:20.170505   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:20.170868   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:20.670496   51953 type.go:168] "Request Body" body=""
	I1210 05:51:20.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:20.670919   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:21.170726   51953 type.go:168] "Request Body" body=""
	I1210 05:51:21.170805   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:21.171135   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:21.171197   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:21.669812   51953 type.go:168] "Request Body" body=""
	I1210 05:51:21.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:21.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:22.169804   51953 type.go:168] "Request Body" body=""
	I1210 05:51:22.169880   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:22.170180   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:22.669830   51953 type.go:168] "Request Body" body=""
	I1210 05:51:22.669902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:22.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:23.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:51:23.169901   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:23.170299   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:23.669871   51953 type.go:168] "Request Body" body=""
	I1210 05:51:23.669940   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:23.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:23.670238   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:24.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:51:24.169903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:24.170239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:24.670061   51953 type.go:168] "Request Body" body=""
	I1210 05:51:24.670134   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:24.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:25.169972   51953 type.go:168] "Request Body" body=""
	I1210 05:51:25.170044   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:25.170325   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:25.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:51:25.669907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:25.670245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:25.670298   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:26.170334   51953 type.go:168] "Request Body" body=""
	I1210 05:51:26.170405   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:26.170720   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... about 120 further polls of the same endpoint elided: minikube re-issued GET https://192.168.49.2:8441/api/v1/nodes/functional-644034 roughly every 500 ms from 05:51:26 through 05:52:27, every response came back empty (status="" headers="" milliseconds=0), and node_ready.go:55 repeated the same warning roughly every 2–2.5 s: error getting node "functional-644034" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1210 05:52:28.169778   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.169852   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.170172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:28.170229   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:28.669766   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.669838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.170045   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.170125   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.170415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.670193   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.670453   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:30.170123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.170199   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.170559   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:30.170635   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:30.670127   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.670200   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.670509   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.169839   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.170095   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.669875   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.670200   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.170198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.669769   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.670162   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:32.670212   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:33.169842   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.169915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:33.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.169925   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.170009   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.170331   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.670116   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.670194   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:34.670571   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:35.170367   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.170782   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:35.670577   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.670647   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.670912   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.170722   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.171183   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.669843   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:37.170702   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.170771   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.171105   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:37.171165   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:37.669824   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.670242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.169909   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.170276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.669712   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.669779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.670087   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.169961   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.170037   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.170366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.670236   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.670306   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.670633   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:39.670687   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:40.170413   51953 type.go:168] "Request Body" body=""
	I1210 05:52:40.170482   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:40.170769   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:40.670553   51953 type.go:168] "Request Body" body=""
	I1210 05:52:40.670623   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:40.670995   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:41.169710   51953 type.go:168] "Request Body" body=""
	I1210 05:52:41.169781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:41.170119   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:41.669770   51953 type.go:168] "Request Body" body=""
	I1210 05:52:41.669843   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:41.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:42.169882   51953 type.go:168] "Request Body" body=""
	I1210 05:52:42.169973   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:42.170386   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:42.170462   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:42.670156   51953 type.go:168] "Request Body" body=""
	I1210 05:52:42.670236   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:42.670580   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:43.170323   51953 type.go:168] "Request Body" body=""
	I1210 05:52:43.170400   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:43.170660   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:43.670466   51953 type.go:168] "Request Body" body=""
	I1210 05:52:43.670547   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:43.670868   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:44.170674   51953 type.go:168] "Request Body" body=""
	I1210 05:52:44.170752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:44.171091   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:44.171150   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:44.670034   51953 type.go:168] "Request Body" body=""
	I1210 05:52:44.670107   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:44.670403   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:45.170340   51953 type.go:168] "Request Body" body=""
	I1210 05:52:45.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:45.170941   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:45.670006   51953 type.go:168] "Request Body" body=""
	I1210 05:52:45.670100   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:45.670436   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:46.169753   51953 type.go:168] "Request Body" body=""
	I1210 05:52:46.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:46.170099   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:46.669811   51953 type.go:168] "Request Body" body=""
	I1210 05:52:46.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:46.670235   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:46.670295   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:47.169851   51953 type.go:168] "Request Body" body=""
	I1210 05:52:47.169946   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:47.170312   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:47.669774   51953 type.go:168] "Request Body" body=""
	I1210 05:52:47.669840   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:47.670114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:48.169824   51953 type.go:168] "Request Body" body=""
	I1210 05:52:48.169897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:48.170265   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:48.669972   51953 type.go:168] "Request Body" body=""
	I1210 05:52:48.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:48.670364   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:48.670428   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:49.170294   51953 type.go:168] "Request Body" body=""
	I1210 05:52:49.170370   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:49.170634   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:49.670676   51953 type.go:168] "Request Body" body=""
	I1210 05:52:49.670754   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:49.671078   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:50.169788   51953 type.go:168] "Request Body" body=""
	I1210 05:52:50.169866   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:50.170203   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:50.669782   51953 type.go:168] "Request Body" body=""
	I1210 05:52:50.669858   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:50.670155   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:51.169834   51953 type.go:168] "Request Body" body=""
	I1210 05:52:51.169907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:51.170263   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:51.170321   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:51.670020   51953 type.go:168] "Request Body" body=""
	I1210 05:52:51.670091   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:51.670371   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:52.170061   51953 type.go:168] "Request Body" body=""
	I1210 05:52:52.170132   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:52.170447   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:52.669850   51953 type.go:168] "Request Body" body=""
	I1210 05:52:52.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:52.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:53.169932   51953 type.go:168] "Request Body" body=""
	I1210 05:52:53.170009   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:53.170341   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:53.170397   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:53.670032   51953 type.go:168] "Request Body" body=""
	I1210 05:52:53.670105   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:53.670414   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:54.169881   51953 type.go:168] "Request Body" body=""
	I1210 05:52:54.169952   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:54.170302   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:54.670115   51953 type.go:168] "Request Body" body=""
	I1210 05:52:54.670186   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:54.670529   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:55.170256   51953 type.go:168] "Request Body" body=""
	I1210 05:52:55.170339   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:55.170657   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:55.170714   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:55.670524   51953 type.go:168] "Request Body" body=""
	I1210 05:52:55.670595   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:55.670950   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:56.170826   51953 type.go:168] "Request Body" body=""
	I1210 05:52:56.170903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:56.171240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:56.669759   51953 type.go:168] "Request Body" body=""
	I1210 05:52:56.669835   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:56.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:57.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:52:57.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:57.170214   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:57.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:52:57.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:57.670255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:57.670314   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:58.170422   51953 type.go:168] "Request Body" body=""
	I1210 05:52:58.170495   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:58.170808   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:58.670565   51953 type.go:168] "Request Body" body=""
	I1210 05:52:58.670637   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:58.670958   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:59.170654   51953 type.go:168] "Request Body" body=""
	I1210 05:52:59.170728   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:59.171071   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:59.669939   51953 type.go:168] "Request Body" body=""
	I1210 05:52:59.670007   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:59.670301   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:59.670342   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:00.169895   51953 type.go:168] "Request Body" body=""
	I1210 05:53:00.169990   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:00.170313   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:00.669978   51953 type.go:168] "Request Body" body=""
	I1210 05:53:00.670066   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:00.670428   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:01.169993   51953 type.go:168] "Request Body" body=""
	I1210 05:53:01.170074   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:01.170371   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:01.669848   51953 type.go:168] "Request Body" body=""
	I1210 05:53:01.669929   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:01.670400   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:01.670459   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:02.169969   51953 type.go:168] "Request Body" body=""
	I1210 05:53:02.170067   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:02.170453   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:02.670163   51953 type.go:168] "Request Body" body=""
	I1210 05:53:02.670231   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:02.670544   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:03.170337   51953 type.go:168] "Request Body" body=""
	I1210 05:53:03.170414   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:03.170717   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:03.670398   51953 type.go:168] "Request Body" body=""
	I1210 05:53:03.670477   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:03.670784   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:03.670830   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:04.170568   51953 type.go:168] "Request Body" body=""
	I1210 05:53:04.170643   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:04.170962   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:04.669739   51953 type.go:168] "Request Body" body=""
	I1210 05:53:04.669820   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:04.670122   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:05.169736   51953 type.go:168] "Request Body" body=""
	I1210 05:53:05.169812   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:05.170142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:05.669731   51953 type.go:168] "Request Body" body=""
	I1210 05:53:05.669801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:05.670088   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:06.169847   51953 type.go:168] "Request Body" body=""
	I1210 05:53:06.169919   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:06.170255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:06.170309   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:06.669839   51953 type.go:168] "Request Body" body=""
	I1210 05:53:06.669932   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:06.670258   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:07.169929   51953 type.go:168] "Request Body" body=""
	I1210 05:53:07.170019   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:07.170306   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:07.670053   51953 type.go:168] "Request Body" body=""
	I1210 05:53:07.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:07.670495   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:08.170204   51953 type.go:168] "Request Body" body=""
	I1210 05:53:08.170277   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:08.170612   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:08.170669   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:08.670407   51953 type.go:168] "Request Body" body=""
	I1210 05:53:08.670480   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:08.670802   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:09.170597   51953 type.go:168] "Request Body" body=""
	I1210 05:53:09.170672   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:09.171003   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:09.669816   51953 type.go:168] "Request Body" body=""
	I1210 05:53:09.669899   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:09.670277   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:10.169910   51953 type.go:168] "Request Body" body=""
	I1210 05:53:10.169986   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:10.170270   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:10.669995   51953 type.go:168] "Request Body" body=""
	I1210 05:53:10.670072   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:10.670375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:10.670419   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:11.169900   51953 type.go:168] "Request Body" body=""
	I1210 05:53:11.169978   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:11.170309   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:11.669761   51953 type.go:168] "Request Body" body=""
	I1210 05:53:11.669842   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:11.670160   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:12.169855   51953 type.go:168] "Request Body" body=""
	I1210 05:53:12.169929   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:12.170311   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:12.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:53:12.669877   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:12.670203   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:13.169791   51953 type.go:168] "Request Body" body=""
	I1210 05:53:13.169859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:13.170213   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:13.170268   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:13.669861   51953 type.go:168] "Request Body" body=""
	I1210 05:53:13.669935   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:13.670288   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:14.169996   51953 type.go:168] "Request Body" body=""
	I1210 05:53:14.170071   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:14.170413   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:14.670109   51953 type.go:168] "Request Body" body=""
	I1210 05:53:14.670182   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:14.670458   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:15.169830   51953 type.go:168] "Request Body" body=""
	I1210 05:53:15.169954   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:15.170288   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:15.170343   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:15.669901   51953 type.go:168] "Request Body" body=""
	I1210 05:53:15.669977   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:15.670322   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:16.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:53:16.169825   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:16.170141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:16.669852   51953 type.go:168] "Request Body" body=""
	I1210 05:53:16.669933   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:16.670259   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:17.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:53:17.169896   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:17.170232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:17.669789   51953 type.go:168] "Request Body" body=""
	I1210 05:53:17.669855   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:17.670121   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:17.670179   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:18.169880   51953 type.go:168] "Request Body" body=""
	I1210 05:53:18.169953   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:18.170308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:18.670029   51953 type.go:168] "Request Body" body=""
	I1210 05:53:18.670125   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:18.670459   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:19.170387   51953 type.go:168] "Request Body" body=""
	I1210 05:53:19.170452   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:19.170715   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:19.670679   51953 type.go:168] "Request Body" body=""
	I1210 05:53:19.670747   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:19.671074   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:19.671133   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:20.169842   51953 type.go:168] "Request Body" body=""
	I1210 05:53:20.169925   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:20.170257   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:20.669763   51953 type.go:168] "Request Body" body=""
	I1210 05:53:20.669856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:20.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:21.169837   51953 type.go:168] "Request Body" body=""
	I1210 05:53:21.169912   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:21.170234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:21.669987   51953 type.go:168] "Request Body" body=""
	I1210 05:53:21.670060   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:21.670390   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:22.170082   51953 type.go:168] "Request Body" body=""
	I1210 05:53:22.170158   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:22.170445   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:22.170499   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:22.669862   51953 type.go:168] "Request Body" body=""
	I1210 05:53:22.669943   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:22.670295   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:23.169959   51953 type.go:168] "Request Body" body=""
	I1210 05:53:23.170036   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:23.170370   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:23.669779   51953 type.go:168] "Request Body" body=""
	I1210 05:53:23.669847   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:23.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:24.169812   51953 type.go:168] "Request Body" body=""
	I1210 05:53:24.169882   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:24.170274   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:24.670128   51953 type.go:168] "Request Body" body=""
	I1210 05:53:24.670208   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:24.670549   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:24.670605   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:25.170307   51953 type.go:168] "Request Body" body=""
	I1210 05:53:25.170382   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:25.170719   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:25.670478   51953 type.go:168] "Request Body" body=""
	I1210 05:53:25.670547   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:25.670828   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:26.170599   51953 type.go:168] "Request Body" body=""
	I1210 05:53:26.170671   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:26.171033   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:26.669709   51953 type.go:168] "Request Body" body=""
	I1210 05:53:26.669782   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:26.670054   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:27.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:53:27.169831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:27.170139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:27.170198   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:27.669834   51953 type.go:168] "Request Body" body=""
	I1210 05:53:27.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:27.670219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:28.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:53:28.169828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:28.170132   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:28.669819   51953 type.go:168] "Request Body" body=""
	I1210 05:53:28.669888   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:28.670189   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:29.170163   51953 type.go:168] "Request Body" body=""
	I1210 05:53:29.170243   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:29.170572   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:29.170631   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:29.670270   51953 type.go:168] "Request Body" body=""
	I1210 05:53:29.670337   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:29.670607   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:30.170496   51953 type.go:168] "Request Body" body=""
	I1210 05:53:30.170584   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:30.170947   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:30.670768   51953 type.go:168] "Request Body" body=""
	I1210 05:53:30.670854   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:30.671206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:31.169798   51953 type.go:168] "Request Body" body=""
	I1210 05:53:31.169872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:31.170184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:31.669871   51953 type.go:168] "Request Body" body=""
	I1210 05:53:31.669951   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:31.670321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:31.670376   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:32.169884   51953 type.go:168] "Request Body" body=""
	I1210 05:53:32.169959   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:32.170251   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:32.669923   51953 type.go:168] "Request Body" body=""
	I1210 05:53:32.669991   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:32.670335   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:33.169817   51953 type.go:168] "Request Body" body=""
	I1210 05:53:33.169889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:33.170220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:33.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:53:33.669885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:33.670198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:34.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:53:34.169833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:34.170101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:34.170150   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:34.670055   51953 type.go:168] "Request Body" body=""
	I1210 05:53:34.670124   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:34.670458   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:35.170164   51953 type.go:168] "Request Body" body=""
	I1210 05:53:35.170239   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:35.170615   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:35.670406   51953 type.go:168] "Request Body" body=""
	I1210 05:53:35.670480   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:35.670747   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:36.170521   51953 type.go:168] "Request Body" body=""
	I1210 05:53:36.170600   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:36.170924   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:36.170976   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:36.670598   51953 type.go:168] "Request Body" body=""
	I1210 05:53:36.670673   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:36.671006   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:37.170525   51953 type.go:168] "Request Body" body=""
	I1210 05:53:37.170598   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:37.170929   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:37.670698   51953 type.go:168] "Request Body" body=""
	I1210 05:53:37.670771   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:37.671111   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:38.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:53:38.169821   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:38.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:38.670414   51953 type.go:168] "Request Body" body=""
	I1210 05:53:38.670482   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:38.670791   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:38.670843   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:39.170611   51953 type.go:168] "Request Body" body=""
	I1210 05:53:39.170682   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:39.171033   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:39.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:53:39.669833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:39.670145   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:40.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:53:40.169827   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:40.170087   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:40.669801   51953 type.go:168] "Request Body" body=""
	I1210 05:53:40.669881   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:40.670219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:41.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:53:41.169995   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:41.170355   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:41.170412   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:41.670056   51953 type.go:168] "Request Body" body=""
	I1210 05:53:41.670122   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:41.670440   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.169947   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.170336   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.670088   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.670163   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.670484   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:43.170162   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.170230   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.170547   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:43.170609   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:43.670381   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.670459   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.670797   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.170478   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.170553   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.170917   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.670710   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.670781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.671096   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.169927   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.170248   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.670167   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.670243   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.670596   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:45.670654   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:46.170356   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.170470   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.170775   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:46.670631   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.670706   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.671056   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.169777   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.169864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.170180   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.670484   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.670850   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:47.670896   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:48.170703   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.170773   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.171186   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:48.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.670270   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.170239   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.170314   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.170632   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.670158   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.670250   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.670638   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:50.170456   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.170536   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.170897   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:50.170949   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:50.670681   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.670750   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.671080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.169790   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.170201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.669911   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.669983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.670289   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.169885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.170158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:52.670299   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:53.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.170220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:53.669788   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.669864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.670142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.169882   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.169960   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.170294   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.670138   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.670217   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.670514   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:54.670556   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:55.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.670235   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.169933   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.170005   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.170326   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.669987   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.670052   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.670317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:57.170000   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.170105   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.170463   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:57.170520   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:57.670190   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.670263   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.670595   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.170369   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.170773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.670583   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.670669   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.671047   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:59.170051   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.170137   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.170479   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:59.170549   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:59.669763   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.669831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:54:00.169895   51953 type.go:168] "Request Body" body=""
	I1210 05:54:00.170210   51953 node_ready.go:38] duration metric: took 6m0.000621671s for node "functional-644034" to be "Ready" ...
	I1210 05:54:00.173449   51953 out.go:203] 
	W1210 05:54:00.176680   51953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 05:54:00.176713   51953 out.go:285] * 
	W1210 05:54:00.178858   51953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:54:00.215003   51953 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:54:08 functional-644034 containerd[5850]: time="2025-12-10T05:54:08.049216332Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.127962117Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.130217614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.137682683Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.138174810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.117118352Z" level=info msg="No images store for sha256:7c7a98f5977d00426b0ab442a3313f38d8159556e5fd94c8cdab70d2b3d72bfe"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.119575436Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-644034\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.128559554Z" level=info msg="ImageCreate event name:\"sha256:187b8b0a3596efc82d8108da07255f790e24f4da482c7a2aa9f3e56dbd5d3e50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.129618394Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.938450474Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.941022095Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.943336087Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.957350615Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.985409887Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.987603597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.999030250Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.999845731Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.020723561Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.023105509Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.025084242Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.032702978Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.168636692Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.170776379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.179474248Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.180046114Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:54:13.928544    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:13.929242    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:13.930909    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:13.931422    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:13.932945    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 05:54:13 up 36 min,  0 user,  load average: 0.47, 0.40, 0.59
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 05:54:10 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:11 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 10 05:54:11 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:11 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:11 functional-644034 kubelet[9590]: E1210 05:54:11.708878    9590 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:11 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:11 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:12 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 10 05:54:12 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:12 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:12 functional-644034 kubelet[9682]: E1210 05:54:12.462596    9682 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:12 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:12 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 10 05:54:13 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:13 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:13 functional-644034 kubelet[9717]: E1210 05:54:13.231139    9717 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 10 05:54:13 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:13 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:13 functional-644034 kubelet[9805]: E1210 05:54:13.982222    9805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
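The long run of round_trippers entries in the log above is minikube's node-ready wait loop: one GET to /api/v1/nodes/functional-644034 roughly every 500ms, each refused because nothing is listening on 192.168.49.2:8441, until the 6m0s budget expires and WaitNodeCondition reports "context deadline exceeded". Below is a minimal sketch of that poll-until-deadline pattern using client-go's standard wait helpers; waitNodeReady and its parameters mirror the logged behavior but are illustrative, not minikube's actual node_ready.go.

    // Minimal sketch (not minikube's real implementation): poll the
    // apiserver for the node's Ready condition every 500ms, treating
    // transient errors such as "connect: connection refused" as
    // retryable, and give up once the overall timeout elapses.
    package ready

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				// Matches the "will retry" warnings in the log:
    				// errors are reported, not fatal, so polling continues.
    				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

Because the condition function treats connection errors as retryable, the loop only ends when the node reports Ready or the timeout context fires; in this run it was always the latter.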
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (425.985728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.49s)
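The kubelet section of the log above pinpoints the root cause: kubelet v1.35.0-rc.1 exits during configuration validation because the host still uses cgroup v1, systemd restarts it about once per second (restart counter 824 through 827), and with no kubelet the apiserver static pod never starts, which is why every request in the wait loop was refused. A host's cgroup version can be read from the filesystem magic of /sys/fs/cgroup; a minimal Linux-only sketch, assuming golang.org/x/sys/unix is available:

    // Minimal sketch: report whether the host boots the cgroup v2
    // unified hierarchy by checking the filesystem magic of
    // /sys/fs/cgroup (Linux only).
    package cgroups

    import "golang.org/x/sys/unix"

    func isCgroupV2() (bool, error) {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		return false, err
    	}
    	// CGROUP2_SUPER_MAGIC is 0x63677270; on a cgroup v1 host the
    	// mount is a tmpfs, so the comparison is false.
    	return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
    }

The kernel line earlier in the dump ("5.15.0-1084-aws #91~20.04.1-Ubuntu") suggests the Jenkins host boots Ubuntu 20.04's default hybrid cgroup v1 layout, which this kubelet release rejects outright.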

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-644034 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-644034 get pods: exit status 1 (104.689902ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-644034 get pods": exit status 1
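For reference, MinikubeKubectlCmdDirectly shells out to the bundled out/kubectl binary and asserts a zero exit status, so with the apiserver down it fails the same way as the wrapper-based MinikubeKubectlCmd test. A hedged sketch of that pattern in a Go test; the test name and helper-free structure are illustrative, not the real functional_test.go:

    // Illustrative sketch of asserting a CLI invocation from a Go test.
    package functional

    import (
    	"os/exec"
    	"testing"
    )

    func TestKubectlDirectly(t *testing.T) {
    	cmd := exec.Command("out/kubectl", "--context", "functional-644034", "get", "pods")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		// With the apiserver down this is "exit status 1" plus the
    		// "connection ... refused" stderr captured above.
    		t.Fatalf("failed to run kubectl directly: %v\n%s", err, out)
    	}
    }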
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
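The inspect output shows the Docker layer itself is healthy: the container is running, holds the expected 192.168.49.2 address on the functional-644034 network, and all five published ports (22, 2376, 5000, 8441, 32443) have loopback host mappings, so the failure is inside the guest rather than in the driver. The same mappings can be read without parsing the JSON:

	# prints each published container port with its host address and port
	docker port functional-644034
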
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (307.554585ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-944360 image ls --format short --alsologtostderr                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls --format yaml --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh     │ functional-944360 ssh pgrep buildkitd                                                                                                                 │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ image   │ functional-944360 image ls --format json --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls --format table --alsologtostderr                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr                                                │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls                                                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ delete  │ -p functional-944360                                                                                                                                  │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ start   │ -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ start   │ -p functional-644034 --alsologtostderr -v=8                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │                     │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:latest                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add minikube-local-cache-test:functional-644034                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache delete minikube-local-cache-test:functional-644034                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl images                                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ cache   │ functional-644034 cache reload                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ kubectl │ functional-644034 kubectl -- --context functional-644034 get pods                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:47:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:47:54.556574   51953 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:47:54.556774   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.556804   51953 out.go:374] Setting ErrFile to fd 2...
	I1210 05:47:54.556824   51953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:47:54.557680   51953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:47:54.558123   51953 out.go:368] Setting JSON to false
	I1210 05:47:54.558985   51953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1825,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:47:54.559094   51953 start.go:143] virtualization:  
	I1210 05:47:54.562634   51953 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:47:54.566518   51953 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:47:54.566592   51953 notify.go:221] Checking for updates...
	I1210 05:47:54.572379   51953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:47:54.575335   51953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:54.578363   51953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:47:54.581210   51953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:47:54.584186   51953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:47:54.587618   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:54.587759   51953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:47:54.618368   51953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:47:54.618493   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.683662   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.67215006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.683767   51953 docker.go:319] overlay module found
	I1210 05:47:54.686996   51953 out.go:179] * Using the docker driver based on existing profile
	I1210 05:47:54.689865   51953 start.go:309] selected driver: docker
	I1210 05:47:54.689883   51953 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.689998   51953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:47:54.690096   51953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:47:54.769093   51953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 05:47:54.760185758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:47:54.769542   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:54.769597   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:47:54.769652   51953 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:54.772754   51953 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:47:54.775504   51953 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:47:54.778330   51953 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:47:54.781109   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:54.781186   51953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:47:54.800171   51953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:47:54.800192   51953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:47:54.839003   51953 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:47:55.003206   51953 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 05:47:55.003455   51953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:47:55.003769   51953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:47:55.003826   51953 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.003903   51953 start.go:364] duration metric: took 49.001µs to acquireMachinesLock for "functional-644034"
	I1210 05:47:55.003933   51953 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:47:55.003940   51953 fix.go:54] fixHost starting: 
	I1210 05:47:55.004094   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.004258   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:55.028659   51953 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:47:55.028694   51953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:47:55.031932   51953 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:47:55.031977   51953 machine.go:94] provisionDockerMachine start ...
	I1210 05:47:55.032062   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.055133   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.055465   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.055479   51953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:47:55.170848   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.207999   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.208023   51953 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:47:55.208102   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.228767   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.229073   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.229085   51953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:47:55.357858   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:55.390746   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:47:55.390831   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.434495   51953 main.go:143] libmachine: Using SSH client type: native
	I1210 05:47:55.434811   51953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:47:55.434828   51953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:47:55.523319   51953 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523359   51953 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523419   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:47:55.523430   51953 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 131.759µs
	I1210 05:47:55.523435   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:47:55.523445   51953 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 87.246µs
	I1210 05:47:55.523453   51953 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523438   51953 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:47:55.523449   51953 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523467   51953 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523481   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:47:55.523488   51953 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.262µs
	I1210 05:47:55.523494   51953 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:47:55.523503   51953 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523523   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:47:55.523531   51953 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 65.428µs
	I1210 05:47:55.523538   51953 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523542   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:47:55.523548   51953 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 45.473µs
	I1210 05:47:55.523554   51953 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:47:55.523548   51953 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523565   51953 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523317   51953 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:47:55.523599   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:47:55.523607   51953 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.7µs
	I1210 05:47:55.523610   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:47:55.523613   51953 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:47:55.523600   51953 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:47:55.523617   51953 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 70.203µs
	I1210 05:47:55.523622   51953 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 325.49µs
	I1210 05:47:55.523626   51953 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523628   51953 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:47:55.523644   51953 cache.go:87] Successfully saved all images to host disk.
	I1210 05:47:55.587205   51953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:47:55.587232   51953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:47:55.587288   51953 ubuntu.go:190] setting up certificates
	I1210 05:47:55.587298   51953 provision.go:84] configureAuth start
	I1210 05:47:55.587369   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:55.604738   51953 provision.go:143] copyHostCerts
	I1210 05:47:55.604778   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604816   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:47:55.604828   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:47:55.604905   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:47:55.605000   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605022   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:47:55.605029   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:47:55.605061   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:47:55.605114   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605134   51953 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:47:55.605139   51953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:47:55.605169   51953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:47:55.605233   51953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:47:55.781276   51953 provision.go:177] copyRemoteCerts
	I1210 05:47:55.781365   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:47:55.781432   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.797956   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:55.902711   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 05:47:55.902771   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:47:55.919779   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 05:47:55.919840   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:47:55.936935   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 05:47:55.936994   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:47:55.953689   51953 provision.go:87] duration metric: took 366.363656ms to configureAuth
	I1210 05:47:55.953721   51953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:47:55.953915   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:55.953927   51953 machine.go:97] duration metric: took 921.944178ms to provisionDockerMachine
	I1210 05:47:55.953936   51953 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:47:55.953952   51953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:47:55.954004   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:47:55.954054   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:55.971130   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.075277   51953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:47:56.078673   51953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:47:56.078694   51953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:47:56.078699   51953 command_runner.go:130] > VERSION_ID="12"
	I1210 05:47:56.078704   51953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:47:56.078708   51953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:47:56.078712   51953 command_runner.go:130] > ID=debian
	I1210 05:47:56.078717   51953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:47:56.078725   51953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:47:56.078732   51953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:47:56.078800   51953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:47:56.078828   51953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:47:56.078840   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:47:56.078899   51953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:47:56.078986   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:47:56.078998   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1210 05:47:56.079103   51953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:47:56.079112   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> /etc/test/nested/copy/4116/hosts
	I1210 05:47:56.079156   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:47:56.086554   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:56.104005   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:47:56.121596   51953 start.go:296] duration metric: took 167.644644ms for postStartSetup
	I1210 05:47:56.121686   51953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:47:56.121728   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.138924   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.243468   51953 command_runner.go:130] > 14%
	I1210 05:47:56.243960   51953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:47:56.248281   51953 command_runner.go:130] > 169G
	I1210 05:47:56.248748   51953 fix.go:56] duration metric: took 1.244804723s for fixHost
	I1210 05:47:56.248771   51953 start.go:83] releasing machines lock for "functional-644034", held for 1.24485909s
	I1210 05:47:56.248837   51953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:47:56.266070   51953 ssh_runner.go:195] Run: cat /version.json
	I1210 05:47:56.266123   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.266146   51953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:47:56.266199   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:56.283872   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.284272   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:56.472387   51953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 05:47:56.475023   51953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:47:56.475222   51953 ssh_runner.go:195] Run: systemctl --version
	I1210 05:47:56.481051   51953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:47:56.481144   51953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:47:56.481557   51953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:47:56.485740   51953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:47:56.485802   51953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:47:56.485889   51953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:47:56.493391   51953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:47:56.493413   51953 start.go:496] detecting cgroup driver to use...
	I1210 05:47:56.493443   51953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:47:56.493499   51953 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:47:56.508720   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:47:56.521711   51953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:47:56.521777   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:47:56.537527   51953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:47:56.551315   51953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:47:56.656595   51953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:47:56.765354   51953 docker.go:234] disabling docker service ...
	I1210 05:47:56.765422   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:47:56.780352   51953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:47:56.793570   51953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:47:56.900961   51953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:47:57.025824   51953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
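
The stop/disable/mask sequence above is how minikube ensures dockerd and cri-docker cannot come back: disable removes the install-section symlinks, while mask links the unit to /dev/null so not even a dependency can start it. A hand-run equivalent (root assumed):

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket          # drop install-section symlinks
	sudo systemctl mask docker.service            # unit now points at /dev/null
	systemctl is-active --quiet docker || echo "docker is down"   # same probe as above
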
	I1210 05:47:57.039104   51953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:47:57.052658   51953 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
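
The single line written to /etc/crictl.yaml is what lets every later crictl call find containerd without an explicit endpoint flag. The same can be done per invocation; a sketch:

	# Equivalent to relying on the /etc/crictl.yaml written above:
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
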
	I1210 05:47:57.053978   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.213891   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:47:57.223164   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:47:57.232001   51953 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:47:57.232070   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:47:57.240776   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.249302   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:47:57.258094   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:47:57.266381   51953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:47:57.274230   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:47:57.282766   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:47:57.291675   51953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
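
The run of sed edits above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the cgroupfs driver chosen earlier, normalizes the runc runtime type to io.containerd.runc.v2, resets the CNI conf_dir, and re-inserts enable_unprivileged_ports = true. A quick way to confirm the result before the containerd restart (a sketch):

	grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = false
	#   enable_unprivileged_ports = true
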
	I1210 05:47:57.300542   51953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:47:57.307150   51953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:47:57.308059   51953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
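
Writing 1 into /proc/sys/net/ipv4/ip_forward enables IPv4 forwarding, which kube-proxy and the CNI plugin rely on; the echo-into-procfs form works even where the sysctl binary is unavailable. The idiomatic equivalent:

	sudo sysctl -w net.ipv4.ip_forward=1
	sysctl net.ipv4.ip_forward    # should print: net.ipv4.ip_forward = 1
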
	I1210 05:47:57.315237   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:57.433904   51953 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:47:57.552794   51953 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:47:57.552901   51953 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:47:57.556769   51953 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1210 05:47:57.556839   51953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:47:57.556861   51953 command_runner.go:130] > Device: 0,73	Inode: 1614        Links: 1
	I1210 05:47:57.556893   51953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:57.556921   51953 command_runner.go:130] > Access: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556947   51953 command_runner.go:130] > Modify: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.556968   51953 command_runner.go:130] > Change: 2025-12-10 05:47:57.523755977 +0000
	I1210 05:47:57.557011   51953 command_runner.go:130] >  Birth: -
	I1210 05:47:57.557078   51953 start.go:564] Will wait 60s for crictl version
	I1210 05:47:57.557155   51953 ssh_runner.go:195] Run: which crictl
	I1210 05:47:57.560538   51953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:47:57.560706   51953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:47:57.582482   51953 command_runner.go:130] > Version:  0.1.0
	I1210 05:47:57.582585   51953 command_runner.go:130] > RuntimeName:  containerd
	I1210 05:47:57.582609   51953 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1210 05:47:57.582715   51953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:47:57.584523   51953 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:47:57.584650   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.601892   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.603507   51953 ssh_runner.go:195] Run: containerd --version
	I1210 05:47:57.622429   51953 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1210 05:47:57.630007   51953 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:47:57.632949   51953 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:47:57.648626   51953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:47:57.652604   51953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 05:47:57.652711   51953 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:47:57.652889   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.820648   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:57.971830   51953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:47:58.124406   51953 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:47:58.124495   51953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:47:58.146688   51953 command_runner.go:130] > {
	I1210 05:47:58.146710   51953 command_runner.go:130] >   "images":  [
	I1210 05:47:58.146724   51953 command_runner.go:130] >     {
	I1210 05:47:58.146735   51953 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1210 05:47:58.146741   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146747   51953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 05:47:58.146750   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146755   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146765   51953 command_runner.go:130] >       "size":  "8032639",
	I1210 05:47:58.146779   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146784   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146790   51953 command_runner.go:130] >     },
	I1210 05:47:58.146794   51953 command_runner.go:130] >     {
	I1210 05:47:58.146801   51953 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 05:47:58.146808   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146813   51953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 05:47:58.146817   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146821   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146830   51953 command_runner.go:130] >       "size":  "21166088",
	I1210 05:47:58.146837   51953 command_runner.go:130] >       "username":  "nonroot",
	I1210 05:47:58.146841   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146844   51953 command_runner.go:130] >     },
	I1210 05:47:58.146847   51953 command_runner.go:130] >     {
	I1210 05:47:58.146855   51953 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1210 05:47:58.146861   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146867   51953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1210 05:47:58.146873   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146878   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146885   51953 command_runner.go:130] >       "size":  "21748497",
	I1210 05:47:58.146888   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146897   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146904   51953 command_runner.go:130] >       },
	I1210 05:47:58.146908   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146912   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146917   51953 command_runner.go:130] >     },
	I1210 05:47:58.146925   51953 command_runner.go:130] >     {
	I1210 05:47:58.146933   51953 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1210 05:47:58.146939   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.146948   51953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1210 05:47:58.146955   51953 command_runner.go:130] >       ],
	I1210 05:47:58.146959   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.146964   51953 command_runner.go:130] >       "size":  "24690149",
	I1210 05:47:58.146967   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.146972   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.146975   51953 command_runner.go:130] >       },
	I1210 05:47:58.146979   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.146985   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.146990   51953 command_runner.go:130] >     },
	I1210 05:47:58.146996   51953 command_runner.go:130] >     {
	I1210 05:47:58.147003   51953 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1210 05:47:58.147007   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147030   51953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1210 05:47:58.147034   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147038   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147042   51953 command_runner.go:130] >       "size":  "20670083",
	I1210 05:47:58.147046   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147050   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147056   51953 command_runner.go:130] >       },
	I1210 05:47:58.147060   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147067   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147070   51953 command_runner.go:130] >     },
	I1210 05:47:58.147081   51953 command_runner.go:130] >     {
	I1210 05:47:58.147088   51953 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1210 05:47:58.147092   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147099   51953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1210 05:47:58.147103   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147107   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147111   51953 command_runner.go:130] >       "size":  "22430795",
	I1210 05:47:58.147122   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147127   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147132   51953 command_runner.go:130] >     },
	I1210 05:47:58.147135   51953 command_runner.go:130] >     {
	I1210 05:47:58.147144   51953 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1210 05:47:58.147150   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147155   51953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1210 05:47:58.147161   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147173   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147180   51953 command_runner.go:130] >       "size":  "15403461",
	I1210 05:47:58.147183   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147187   51953 command_runner.go:130] >         "value":  "0"
	I1210 05:47:58.147190   51953 command_runner.go:130] >       },
	I1210 05:47:58.147194   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147198   51953 command_runner.go:130] >       "pinned":  false
	I1210 05:47:58.147205   51953 command_runner.go:130] >     },
	I1210 05:47:58.147208   51953 command_runner.go:130] >     {
	I1210 05:47:58.147215   51953 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 05:47:58.147221   51953 command_runner.go:130] >       "repoTags":  [
	I1210 05:47:58.147226   51953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 05:47:58.147232   51953 command_runner.go:130] >       ],
	I1210 05:47:58.147236   51953 command_runner.go:130] >       "repoDigests":  [],
	I1210 05:47:58.147248   51953 command_runner.go:130] >       "size":  "265458",
	I1210 05:47:58.147252   51953 command_runner.go:130] >       "uid":  {
	I1210 05:47:58.147256   51953 command_runner.go:130] >         "value":  "65535"
	I1210 05:47:58.147259   51953 command_runner.go:130] >       },
	I1210 05:47:58.147270   51953 command_runner.go:130] >       "username":  "",
	I1210 05:47:58.147274   51953 command_runner.go:130] >       "pinned":  true
	I1210 05:47:58.147277   51953 command_runner.go:130] >     }
	I1210 05:47:58.147282   51953 command_runner.go:130] >   ]
	I1210 05:47:58.147284   51953 command_runner.go:130] > }
	I1210 05:47:58.149521   51953 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:47:58.149540   51953 cache_images.go:86] Images are preloaded, skipping loading
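
The preload check above boils down to comparing the repoTags in the crictl images JSON against the image list expected for Kubernetes v1.35.0-rc.1. Roughly the same comparison by hand (jq assumed present; a sketch, not minikube's code):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
	# should list the v1.35.0-rc.1 apiserver/controller-manager/scheduler/proxy
	# images plus coredns, etcd, pause and storage-provisioner, as dumped above.
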
	I1210 05:47:58.149552   51953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:47:58.149645   51953 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
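
The ExecStart override above is installed as the systemd drop-in 10-kubeadm.conf, scp'd below; the empty ExecStart= line first clears the packaged command before substituting minikube's. Once written, the merged unit can be inspected with:

	systemctl cat kubelet    # prints kubelet.service followed by the 10-kubeadm.conf drop-in
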
	I1210 05:47:58.149706   51953 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:47:58.176587   51953 command_runner.go:130] > {
	I1210 05:47:58.176610   51953 command_runner.go:130] >   "cniconfig": {
	I1210 05:47:58.176616   51953 command_runner.go:130] >     "Networks": [
	I1210 05:47:58.176620   51953 command_runner.go:130] >       {
	I1210 05:47:58.176624   51953 command_runner.go:130] >         "Config": {
	I1210 05:47:58.176629   51953 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1210 05:47:58.176644   51953 command_runner.go:130] >           "Name": "cni-loopback",
	I1210 05:47:58.176648   51953 command_runner.go:130] >           "Plugins": [
	I1210 05:47:58.176652   51953 command_runner.go:130] >             {
	I1210 05:47:58.176657   51953 command_runner.go:130] >               "Network": {
	I1210 05:47:58.176662   51953 command_runner.go:130] >                 "ipam": {},
	I1210 05:47:58.176673   51953 command_runner.go:130] >                 "type": "loopback"
	I1210 05:47:58.176678   51953 command_runner.go:130] >               },
	I1210 05:47:58.176687   51953 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1210 05:47:58.176691   51953 command_runner.go:130] >             }
	I1210 05:47:58.176694   51953 command_runner.go:130] >           ],
	I1210 05:47:58.176704   51953 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1210 05:47:58.176717   51953 command_runner.go:130] >         },
	I1210 05:47:58.176725   51953 command_runner.go:130] >         "IFName": "lo"
	I1210 05:47:58.176728   51953 command_runner.go:130] >       }
	I1210 05:47:58.176732   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176736   51953 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1210 05:47:58.176742   51953 command_runner.go:130] >     "PluginDirs": [
	I1210 05:47:58.176746   51953 command_runner.go:130] >       "/opt/cni/bin"
	I1210 05:47:58.176752   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176756   51953 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1210 05:47:58.176771   51953 command_runner.go:130] >     "Prefix": "eth"
	I1210 05:47:58.176775   51953 command_runner.go:130] >   },
	I1210 05:47:58.176782   51953 command_runner.go:130] >   "config": {
	I1210 05:47:58.176789   51953 command_runner.go:130] >     "cdiSpecDirs": [
	I1210 05:47:58.176793   51953 command_runner.go:130] >       "/etc/cdi",
	I1210 05:47:58.176797   51953 command_runner.go:130] >       "/var/run/cdi"
	I1210 05:47:58.176803   51953 command_runner.go:130] >     ],
	I1210 05:47:58.176807   51953 command_runner.go:130] >     "cni": {
	I1210 05:47:58.176813   51953 command_runner.go:130] >       "binDir": "",
	I1210 05:47:58.176817   51953 command_runner.go:130] >       "binDirs": [
	I1210 05:47:58.176821   51953 command_runner.go:130] >         "/opt/cni/bin"
	I1210 05:47:58.176825   51953 command_runner.go:130] >       ],
	I1210 05:47:58.176836   51953 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1210 05:47:58.176840   51953 command_runner.go:130] >       "confTemplate": "",
	I1210 05:47:58.176844   51953 command_runner.go:130] >       "ipPref": "",
	I1210 05:47:58.176850   51953 command_runner.go:130] >       "maxConfNum": 1,
	I1210 05:47:58.176854   51953 command_runner.go:130] >       "setupSerially": false,
	I1210 05:47:58.176861   51953 command_runner.go:130] >       "useInternalLoopback": false
	I1210 05:47:58.176864   51953 command_runner.go:130] >     },
	I1210 05:47:58.176874   51953 command_runner.go:130] >     "containerd": {
	I1210 05:47:58.176880   51953 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1210 05:47:58.176886   51953 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1210 05:47:58.176892   51953 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1210 05:47:58.176901   51953 command_runner.go:130] >       "runtimes": {
	I1210 05:47:58.176905   51953 command_runner.go:130] >         "runc": {
	I1210 05:47:58.176909   51953 command_runner.go:130] >           "ContainerAnnotations": null,
	I1210 05:47:58.176915   51953 command_runner.go:130] >           "PodAnnotations": null,
	I1210 05:47:58.176920   51953 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1210 05:47:58.176926   51953 command_runner.go:130] >           "cgroupWritable": false,
	I1210 05:47:58.176930   51953 command_runner.go:130] >           "cniConfDir": "",
	I1210 05:47:58.176934   51953 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1210 05:47:58.176939   51953 command_runner.go:130] >           "io_type": "",
	I1210 05:47:58.176943   51953 command_runner.go:130] >           "options": {
	I1210 05:47:58.176950   51953 command_runner.go:130] >             "BinaryName": "",
	I1210 05:47:58.176955   51953 command_runner.go:130] >             "CriuImagePath": "",
	I1210 05:47:58.176970   51953 command_runner.go:130] >             "CriuWorkPath": "",
	I1210 05:47:58.176977   51953 command_runner.go:130] >             "IoGid": 0,
	I1210 05:47:58.176981   51953 command_runner.go:130] >             "IoUid": 0,
	I1210 05:47:58.176985   51953 command_runner.go:130] >             "NoNewKeyring": false,
	I1210 05:47:58.176991   51953 command_runner.go:130] >             "Root": "",
	I1210 05:47:58.176995   51953 command_runner.go:130] >             "ShimCgroup": "",
	I1210 05:47:58.177002   51953 command_runner.go:130] >             "SystemdCgroup": false
	I1210 05:47:58.177005   51953 command_runner.go:130] >           },
	I1210 05:47:58.177011   51953 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1210 05:47:58.177019   51953 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1210 05:47:58.177023   51953 command_runner.go:130] >           "runtimePath": "",
	I1210 05:47:58.177030   51953 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1210 05:47:58.177035   51953 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1210 05:47:58.177041   51953 command_runner.go:130] >           "snapshotter": ""
	I1210 05:47:58.177044   51953 command_runner.go:130] >         }
	I1210 05:47:58.177049   51953 command_runner.go:130] >       }
	I1210 05:47:58.177052   51953 command_runner.go:130] >     },
	I1210 05:47:58.177065   51953 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1210 05:47:58.177073   51953 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1210 05:47:58.177078   51953 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1210 05:47:58.177083   51953 command_runner.go:130] >     "disableApparmor": false,
	I1210 05:47:58.177090   51953 command_runner.go:130] >     "disableHugetlbController": true,
	I1210 05:47:58.177094   51953 command_runner.go:130] >     "disableProcMount": false,
	I1210 05:47:58.177098   51953 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1210 05:47:58.177102   51953 command_runner.go:130] >     "enableCDI": true,
	I1210 05:47:58.177106   51953 command_runner.go:130] >     "enableSelinux": false,
	I1210 05:47:58.177114   51953 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1210 05:47:58.177118   51953 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1210 05:47:58.177125   51953 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1210 05:47:58.177130   51953 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1210 05:47:58.177138   51953 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1210 05:47:58.177142   51953 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1210 05:47:58.177147   51953 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1210 05:47:58.177160   51953 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177170   51953 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1210 05:47:58.177176   51953 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1210 05:47:58.177186   51953 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1210 05:47:58.177190   51953 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1210 05:47:58.177193   51953 command_runner.go:130] >   },
	I1210 05:47:58.177197   51953 command_runner.go:130] >   "features": {
	I1210 05:47:58.177201   51953 command_runner.go:130] >     "supplemental_groups_policy": true
	I1210 05:47:58.177204   51953 command_runner.go:130] >   },
	I1210 05:47:58.177209   51953 command_runner.go:130] >   "golang": "go1.24.9",
	I1210 05:47:58.177221   51953 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177233   51953 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1210 05:47:58.177237   51953 command_runner.go:130] >   "runtimeHandlers": [
	I1210 05:47:58.177246   51953 command_runner.go:130] >     {
	I1210 05:47:58.177250   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177255   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177259   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177261   51953 command_runner.go:130] >       }
	I1210 05:47:58.177264   51953 command_runner.go:130] >     },
	I1210 05:47:58.177267   51953 command_runner.go:130] >     {
	I1210 05:47:58.177271   51953 command_runner.go:130] >       "features": {
	I1210 05:47:58.177275   51953 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1210 05:47:58.177279   51953 command_runner.go:130] >         "user_namespaces": true
	I1210 05:47:58.177282   51953 command_runner.go:130] >       },
	I1210 05:47:58.177287   51953 command_runner.go:130] >       "name": "runc"
	I1210 05:47:58.177289   51953 command_runner.go:130] >     }
	I1210 05:47:58.177293   51953 command_runner.go:130] >   ],
	I1210 05:47:58.177296   51953 command_runner.go:130] >   "status": {
	I1210 05:47:58.177300   51953 command_runner.go:130] >     "conditions": [
	I1210 05:47:58.177303   51953 command_runner.go:130] >       {
	I1210 05:47:58.177307   51953 command_runner.go:130] >         "message": "",
	I1210 05:47:58.177314   51953 command_runner.go:130] >         "reason": "",
	I1210 05:47:58.177318   51953 command_runner.go:130] >         "status": true,
	I1210 05:47:58.177329   51953 command_runner.go:130] >         "type": "RuntimeReady"
	I1210 05:47:58.177335   51953 command_runner.go:130] >       },
	I1210 05:47:58.177339   51953 command_runner.go:130] >       {
	I1210 05:47:58.177345   51953 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1210 05:47:58.177356   51953 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1210 05:47:58.177360   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177365   51953 command_runner.go:130] >         "type": "NetworkReady"
	I1210 05:47:58.177373   51953 command_runner.go:130] >       },
	I1210 05:47:58.177376   51953 command_runner.go:130] >       {
	I1210 05:47:58.177397   51953 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1210 05:47:58.177406   51953 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1210 05:47:58.177414   51953 command_runner.go:130] >         "status": false,
	I1210 05:47:58.177420   51953 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1210 05:47:58.177425   51953 command_runner.go:130] >       }
	I1210 05:47:58.177428   51953 command_runner.go:130] >     ]
	I1210 05:47:58.177431   51953 command_runner.go:130] >   }
	I1210 05:47:58.177434   51953 command_runner.go:130] > }
	I1210 05:47:58.177746   51953 cni.go:84] Creating CNI manager for ""
	I1210 05:47:58.177762   51953 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
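
In the crictl info dump above, NetworkReady is false ("cni plugin not initialized"): nothing has written a CNI config into /etc/cni/net.d yet, and for the docker driver with containerd minikube will deploy kindnet to do so. The condition can be polled directly (jq assumed; a sketch):

	sudo crictl info | jq '.status.conditions[] | select(.type=="NetworkReady")'
	# .status flips to true once kindnet drops its conf into /etc/cni/net.d
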
	I1210 05:47:58.177786   51953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:47:58.177809   51953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:47:58.177931   51953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:47:58.178005   51953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:47:58.184894   51953 command_runner.go:130] > kubeadm
	I1210 05:47:58.184912   51953 command_runner.go:130] > kubectl
	I1210 05:47:58.184916   51953 command_runner.go:130] > kubelet
	I1210 05:47:58.185786   51953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:47:58.185866   51953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:47:58.193140   51953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:47:58.205426   51953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:47:58.217773   51953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
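
The rendered config is staged as kubeadm.yaml.new and later diffed against the live kubeadm.yaml to decide whether the control plane needs reconfiguring. Recent kubeadm releases can also lint such a file directly; a sketch using the paths from this run (assuming the subcommand is available in this kubeadm build):

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
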
	I1210 05:47:58.230424   51953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:47:58.234124   51953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:47:58.234224   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:58.348721   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:58.367663   51953 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:47:58.367683   51953 certs.go:195] generating shared ca certs ...
	I1210 05:47:58.367699   51953 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:58.367828   51953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:47:58.367870   51953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:47:58.367878   51953 certs.go:257] generating profile certs ...
	I1210 05:47:58.367976   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:47:58.368034   51953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:47:58.368079   51953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:47:58.368088   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:47:58.368100   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:47:58.368115   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:47:58.368126   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:47:58.368137   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:47:58.368148   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:47:58.368163   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:47:58.368174   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:47:58.368220   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:47:58.368248   51953 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:47:58.368256   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:47:58.368286   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:47:58.368309   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:47:58.368331   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:47:58.368373   51953 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:47:58.368402   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.368414   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.368427   51953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.368978   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:47:58.388893   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:47:58.409416   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:47:58.428450   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:47:58.446489   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:47:58.465644   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:47:58.483264   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:47:58.500807   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:47:58.518107   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:47:58.536070   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:47:58.553632   51953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:47:58.571692   51953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:47:58.584898   51953 ssh_runner.go:195] Run: openssl version
	I1210 05:47:58.590608   51953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:47:58.591139   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.599076   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:47:58.606632   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610200   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610255   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.610308   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:47:58.650574   51953 command_runner.go:130] > 51391683
	I1210 05:47:58.651004   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:47:58.658249   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.665388   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:47:58.672651   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676295   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676329   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.676381   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:47:58.716661   51953 command_runner.go:130] > 3ec20f2e
	I1210 05:47:58.717156   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:47:58.724496   51953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.731755   51953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:47:58.739224   51953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742739   51953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742773   51953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.742827   51953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:47:58.783109   51953 command_runner.go:130] > b5213941
	I1210 05:47:58.783531   51953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
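
Each CA install above follows the OpenSSL hashed-directory convention: the PEM goes into /usr/share/ca-certificates and a symlink named <subject-hash>.0 is created in /etc/ssl/certs so TLS clients can locate it; 51391683, 3ec20f2e and b5213941 are exactly the openssl x509 -hash outputs. By hand:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"
	sudo test -L "/etc/ssl/certs/${H}.0" && echo linked    # same check as above
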
	I1210 05:47:58.790793   51953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794232   51953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:47:58.794258   51953 command_runner.go:130] >   Size: 1172      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:47:58.794265   51953 command_runner.go:130] > Device: 259,1	Inode: 1307887     Links: 1
	I1210 05:47:58.794272   51953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:47:58.794286   51953 command_runner.go:130] > Access: 2025-12-10 05:43:51.022657545 +0000
	I1210 05:47:58.794292   51953 command_runner.go:130] > Modify: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794297   51953 command_runner.go:130] > Change: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794305   51953 command_runner.go:130] >  Birth: 2025-12-10 05:39:46.061180084 +0000
	I1210 05:47:58.794558   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:47:58.837377   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.837465   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:47:58.877636   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.878121   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:47:58.918797   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.919235   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:47:58.959487   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:58.960010   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:47:59.003251   51953 command_runner.go:130] > Certificate will not expire
	I1210 05:47:59.003763   51953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:47:59.044279   51953 command_runner.go:130] > Certificate will not expire
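
The -checkend 86400 probes above ask whether each certificate expires within the next 86400 seconds (24 hours): exit status 0 prints "Certificate will not expire" and lets minikube skip regeneration; a non-zero status would trigger renewal. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "still valid for 24h" || echo "renewal needed"
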
	I1210 05:47:59.044747   51953 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:47:59.044823   51953 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:47:59.044887   51953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:47:59.069970   51953 cri.go:89] found id: ""
	I1210 05:47:59.070038   51953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:47:59.076652   51953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:47:59.076673   51953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:47:59.076679   51953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:47:59.077535   51953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:47:59.077555   51953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:47:59.077617   51953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:47:59.084671   51953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:47:59.085448   51953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-644034" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.085850   51953 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "functional-644034" cluster setting kubeconfig missing "functional-644034" context setting]
	I1210 05:47:59.086310   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.087190   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.087371   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.088034   51953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:47:59.088055   51953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:47:59.088068   51953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:47:59.088074   51953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:47:59.088078   51953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:47:59.088429   51953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:47:59.089407   51953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:47:59.096980   51953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 05:47:59.097014   51953 kubeadm.go:602] duration metric: took 19.453757ms to restartPrimaryControlPlane
	I1210 05:47:59.097024   51953 kubeadm.go:403] duration metric: took 52.281886ms to StartCluster
	I1210 05:47:59.097064   51953 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097152   51953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.097734   51953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:47:59.097941   51953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 05:47:59.098267   51953 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:47:59.098318   51953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:47:59.098380   51953 addons.go:70] Setting storage-provisioner=true in profile "functional-644034"
	I1210 05:47:59.098393   51953 addons.go:239] Setting addon storage-provisioner=true in "functional-644034"
	I1210 05:47:59.098419   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.098907   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.101905   51953 out.go:179] * Verifying Kubernetes components...
	I1210 05:47:59.106662   51953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:47:59.109785   51953 addons.go:70] Setting default-storageclass=true in profile "functional-644034"
	I1210 05:47:59.109823   51953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-644034"
	I1210 05:47:59.110155   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.137186   51953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:47:59.140065   51953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.140094   51953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:47:59.140172   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.152137   51953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:47:59.152308   51953 kapi.go:59] client config for functional-644034: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:47:59.152605   51953 addons.go:239] Setting addon default-storageclass=true in "functional-644034"
	I1210 05:47:59.152636   51953 host.go:66] Checking if "functional-644034" exists ...
	I1210 05:47:59.153047   51953 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:47:59.173160   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
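The "scp memory -->" lines above mean the addon manifest is streamed from an in-memory buffer to the node over the SSH port that `docker container inspect` just resolved (32788 in this run). A rough sketch of streaming bytes to a remote path with golang.org/x/crypto/ssh follows; it is not minikube's sshutil, and the key path, destination, and "sudo tee" transfer are stand-ins:

    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyMemory streams an in-memory payload to a remote path, roughly the
    // idea behind "scp memory --> /etc/kubernetes/addons/..." in the log.
    func copyMemory(client *ssh.Client, payload []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload)
        // "sudo tee" stands in for the real transfer protocol, which this
        // sketch does not reproduce.
        return sess.Run("sudo tee " + dst + " > /dev/null")
    }

    func main() {
        key, err := os.ReadFile("/home/user/.minikube/machines/example/id_rsa") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        if err := copyMemory(client, []byte("kind: StorageClass\n"), "/tmp/example.yaml"); err != nil {
            log.Fatal(err)
        }
    }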
	I1210 05:47:59.202277   51953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:47:59.202307   51953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:47:59.202368   51953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:47:59.232670   51953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:47:59.321380   51953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:47:59.337472   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:47:59.374986   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.169551   51953 node_ready.go:35] waiting up to 6m0s for node "functional-644034" to be "Ready" ...
	I1210 05:48:00.169689   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.169752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
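The Accept header in these requests, application/vnd.kubernetes.protobuf,application/json, asks the apiserver for protobuf first with JSON as a fallback. A hedged client-go sketch of configuring that preference and issuing the same node GET (the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        // Prefer protobuf, fall back to JSON, matching the Accept header
        // logged by round_trippers above.
        cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.Background(), "functional-644034", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err) // during an apiserver restart this is "connection refused"
        }
        fmt.Println(node.Name)
    }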
	I1210 05:48:00.170008   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170051   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170077   51953 retry.go:31] will retry after 139.03743ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
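Note on the failure mode above: the apply dies before anything reaches the cluster, because kubectl's client-side validation first fetches the OpenAPI schema from the server named in /var/lib/minikube/kubeconfig (localhost:8441 here), and that TCP connect is refused while the apiserver restarts; --validate=false would skip the schema fetch entirely. A small probe of the same endpoint, reproducing the same error (InsecureSkipVerify is acceptable only against a local test cluster):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Hit the exact URL kubectl validation requests in the log above.
        c := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get("https://localhost:8441/openapi/v2?timeout=32s")
        if err != nil {
            // While the apiserver is down this is the same "connection
            // refused" that kubectl wraps into its validation error.
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
    }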
	I1210 05:48:00.170121   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.170135   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.170145   51953 retry.go:31] will retry after 348.331986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
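Each failed apply is rescheduled by retry.go with a randomized, roughly doubling delay (139ms, 348ms, ... growing to ~13s later in this log). A minimal sketch of that retry-with-jittered-backoff pattern, not minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff mirrors the "will retry after ..." lines: randomized
    // delays that roughly double between attempts until the budget runs out.
    func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            // Jitter into [0.5x, 1.5x) of the current delay.
            jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(5, 150*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("connection refused") // stands in for the kubectl failure
            }
            return nil
        })
    }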
	I1210 05:48:00.170219   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.310507   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.415931   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.416069   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.416135   51953 retry.go:31] will retry after 233.204425ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.519312   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:00.585157   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.585240   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.585274   51953 retry.go:31] will retry after 499.606359ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.650447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:00.669993   51953 type.go:168] "Request Body" body=""
	I1210 05:48:00.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:00.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:00.712181   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:00.715417   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:00.715449   51953 retry.go:31] will retry after 781.025556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.086035   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.148055   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.148095   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.148115   51953 retry.go:31] will retry after 644.355236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.170281   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.170372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.170734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.497246   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:01.552133   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.555247   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.555278   51953 retry.go:31] will retry after 1.200680207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.670555   51953 type.go:168] "Request Body" body=""
	I1210 05:48:01.670646   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:01.670959   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:01.793341   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:01.851452   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:01.854727   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:01.854768   51953 retry.go:31] will retry after 727.381606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.170188   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.170290   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.170618   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:02.170696   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
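The warning above makes the empty Response lines explicit: the GETs never get an HTTP status because the TCP connect to 192.168.49.2:8441 is refused, and node_ready.go keeps polling on a ~500ms cadence inside its 6m budget. A hedged client-go sketch of an equivalent wait (placeholder kubeconfig path; errors are swallowed so the poll survives the apiserver restart):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Poll every 500ms for up to 6 minutes, matching the cadence and the
        // "waiting up to 6m0s" budget seen in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "functional-644034", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying while the apiserver is down
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("node is Ready")
    }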
	I1210 05:48:02.583237   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:02.649935   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.649981   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.650022   51953 retry.go:31] will retry after 1.310515996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.670155   51953 type.go:168] "Request Body" body=""
	I1210 05:48:02.670292   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:02.670651   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:02.757075   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:02.818837   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:02.821796   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:02.821831   51953 retry.go:31] will retry after 1.687874073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:03.170317   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.170406   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.170707   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.670505   51953 type.go:168] "Request Body" body=""
	I1210 05:48:03.670583   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:03.670925   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:03.961404   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:04.024244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.024282   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.024323   51953 retry.go:31] will retry after 1.628415395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.170524   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.170651   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.171067   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:04.171129   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:04.510724   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:04.566617   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:04.570030   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.570064   51953 retry.go:31] will retry after 2.695563296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:04.670310   51953 type.go:168] "Request Body" body=""
	I1210 05:48:04.670389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:04.670711   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.170563   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.170635   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.170967   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.653658   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:05.670351   51953 type.go:168] "Request Body" body=""
	I1210 05:48:05.670461   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:05.670799   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:05.744168   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:05.744207   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:05.744248   51953 retry.go:31] will retry after 1.470532715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:06.169848   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.169975   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.170317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:06.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:06.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:06.670264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:06.670329   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:07.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.170058   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:07.215626   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:07.266052   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:07.280336   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.280370   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.280387   51953 retry.go:31] will retry after 5.58106306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333195   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:07.333236   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.333256   51953 retry.go:31] will retry after 2.610344026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:07.670753   51953 type.go:168] "Request Body" body=""
	I1210 05:48:07.670832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:07.671195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.169912   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.170281   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:08.669773   51953 type.go:168] "Request Body" body=""
	I1210 05:48:08.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:08.670131   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.170205   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.170536   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:09.170594   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:09.670237   51953 type.go:168] "Request Body" body=""
	I1210 05:48:09.670311   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:09.670667   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:09.944159   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:10.010561   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:10.010619   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.010642   51953 retry.go:31] will retry after 2.5620788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:10.169787   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.169854   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.170167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:10.669895   51953 type.go:168] "Request Body" body=""
	I1210 05:48:10.669974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:10.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.169913   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.170234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:11.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:48:11.669833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:11.670159   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:11.670233   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:12.169956   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.170030   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.170375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.572886   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:12.631295   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.634400   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.634432   51953 retry.go:31] will retry after 5.90622422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.670736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:12.670808   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:12.671172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:12.862533   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:12.918893   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:12.918929   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:12.918949   51953 retry.go:31] will retry after 8.272023324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:13.170464   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.170532   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.170809   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:13.670589   51953 type.go:168] "Request Body" body=""
	I1210 05:48:13.670665   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:13.670979   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:13.671051   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:14.170623   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.170704   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.171052   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:14.669975   51953 type.go:168] "Request Body" body=""
	I1210 05:48:14.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:14.670351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.170046   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.170119   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.170417   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:15.670099   51953 type.go:168] "Request Body" body=""
	I1210 05:48:15.670181   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:15.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:16.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.170152   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:16.170210   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:16.669859   51953 type.go:168] "Request Body" body=""
	I1210 05:48:16.669945   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:16.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.169889   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:17.669877   51953 type.go:168] "Request Body" body=""
	I1210 05:48:17.669969   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:17.670225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:18.169971   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.170045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.170383   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:18.170445   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:18.540818   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:18.598871   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:18.601811   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.601841   51953 retry.go:31] will retry after 12.747843498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:18.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:18.670272   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:18.670582   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.170370   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.170452   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.170779   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:19.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:48:19.670779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:19.671114   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.169841   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.169920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.170286   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:20.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:20.669841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:20.670151   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:20.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:21.169914   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.169987   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.170308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:21.191680   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:21.254244   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:21.254291   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.254309   51953 retry.go:31] will retry after 13.504528238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:21.669784   51953 type.go:168] "Request Body" body=""
	I1210 05:48:21.669884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:21.670234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.169979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.170274   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:22.670052   51953 type.go:168] "Request Body" body=""
	I1210 05:48:22.670132   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:22.670457   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:22.670511   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:23.170156   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.170275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.170563   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:23.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:48:23.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:23.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.169911   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.170218   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:24.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:24.670237   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:24.670543   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:24.670597   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:25.170342   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.170412   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.170680   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:25.670543   51953 type.go:168] "Request Body" body=""
	I1210 05:48:25.670623   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:25.670919   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.170671   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.170749   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.171110   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:26.669682   51953 type.go:168] "Request Body" body=""
	I1210 05:48:26.669752   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:26.670007   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:27.170402   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.170479   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.170798   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:27.170859   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:27.670357   51953 type.go:168] "Request Body" body=""
	I1210 05:48:27.670437   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:27.670773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.170551   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.170643   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.170896   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:28.670265   51953 type.go:168] "Request Body" body=""
	I1210 05:48:28.670338   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:28.670758   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:29.170472   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.170542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.170877   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:29.170933   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:29.669736   51953 type.go:168] "Request Body" body=""
	I1210 05:48:29.669810   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:29.670135   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.169864   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.169940   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.170305   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:30.669879   51953 type.go:168] "Request Body" body=""
	I1210 05:48:30.669957   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:30.670279   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.169757   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.169834   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:31.350447   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:31.407735   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:31.410898   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.410931   51953 retry.go:31] will retry after 18.518112559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:31.670455   51953 type.go:168] "Request Body" body=""
	I1210 05:48:31.670542   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:31.670952   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:31.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:32.170764   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.170837   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.171167   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:32.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:48:32.669900   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:32.670158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.169936   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:33.669974   51953 type.go:168] "Request Body" body=""
	I1210 05:48:33.670051   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:33.670366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.170663   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.170730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.171001   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:34.171083   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:34.670065   51953 type.go:168] "Request Body" body=""
	I1210 05:48:34.670138   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:34.670459   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:34.759888   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:34.813991   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:34.817148   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:34.817180   51953 retry.go:31] will retry after 7.858877757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:35.170714   51953 type.go:168] "Request Body" body=""
	I1210 05:48:35.170783   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:35.171144   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:35.669794   51953 type.go:168] "Request Body" body=""
	I1210 05:48:35.669884   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:35.670145   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:36.169862   51953 type.go:168] "Request Body" body=""
	I1210 05:48:36.169932   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:36.170264   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:36.669949   51953 type.go:168] "Request Body" body=""
	I1210 05:48:36.670019   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:36.670336   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:36.670392   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:37.170023   51953 type.go:168] "Request Body" body=""
	I1210 05:48:37.170089   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:37.170351   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:37.670112   51953 type.go:168] "Request Body" body=""
	I1210 05:48:37.670187   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:37.670504   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:38.170212   51953 type.go:168] "Request Body" body=""
	I1210 05:48:38.170304   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:38.170601   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:38.670326   51953 type.go:168] "Request Body" body=""
	I1210 05:48:38.670390   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:38.670677   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:38.670718   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:39.170413   51953 type.go:168] "Request Body" body=""
	I1210 05:48:39.170487   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:39.170808   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:39.669722   51953 type.go:168] "Request Body" body=""
	I1210 05:48:39.669794   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:39.670121   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:40.169742   51953 type.go:168] "Request Body" body=""
	I1210 05:48:40.169816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:40.170090   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:40.669786   51953 type.go:168] "Request Body" body=""
	I1210 05:48:40.669863   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:40.670230   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:41.169931   51953 type.go:168] "Request Body" body=""
	I1210 05:48:41.170003   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:41.170334   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:41.170388   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:41.670036   51953 type.go:168] "Request Body" body=""
	I1210 05:48:41.670109   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:41.670415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.170132   51953 type.go:168] "Request Body" body=""
	I1210 05:48:42.170213   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:42.170632   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.670451   51953 type.go:168] "Request Body" body=""
	I1210 05:48:42.670533   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:42.670872   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:42.677131   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:48:42.736218   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:42.736261   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:42.736279   51953 retry.go:31] will retry after 23.425189001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:43.170386   51953 type.go:168] "Request Body" body=""
	I1210 05:48:43.170464   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:43.170737   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:43.170779   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:43.670538   51953 type.go:168] "Request Body" body=""
	I1210 05:48:43.670609   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:43.670906   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:44.170640   51953 type.go:168] "Request Body" body=""
	I1210 05:48:44.170719   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:44.171057   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:44.669915   51953 type.go:168] "Request Body" body=""
	I1210 05:48:44.669983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:44.670265   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:45.170036   51953 type.go:168] "Request Body" body=""
	I1210 05:48:45.170123   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:45.175201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1210 05:48:45.175287   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:45.669826   51953 type.go:168] "Request Body" body=""
	I1210 05:48:45.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:45.670195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:46.170498   51953 type.go:168] "Request Body" body=""
	I1210 05:48:46.170576   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:46.170876   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:46.670607   51953 type.go:168] "Request Body" body=""
	I1210 05:48:46.670701   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:46.671031   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:47.169754   51953 type.go:168] "Request Body" body=""
	I1210 05:48:47.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:47.170154   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:47.669733   51953 type.go:168] "Request Body" body=""
	I1210 05:48:47.669806   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:47.670071   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:47.670117   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:48.169793   51953 type.go:168] "Request Body" body=""
	I1210 05:48:48.169879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:48.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:48.669835   51953 type.go:168] "Request Body" body=""
	I1210 05:48:48.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:48.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:49.170055   51953 type.go:168] "Request Body" body=""
	I1210 05:48:49.170124   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:49.170378   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:49.670161   51953 type.go:168] "Request Body" body=""
	I1210 05:48:49.670235   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:49.670525   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:49.670571   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:49.930022   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:48:49.989791   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:48:49.993079   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:49.993114   51953 retry.go:31] will retry after 23.38662002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:48:50.170615   51953 type.go:168] "Request Body" body=""
	I1210 05:48:50.170692   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:50.171002   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:50.669688   51953 type.go:168] "Request Body" body=""
	I1210 05:48:50.669757   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:50.670060   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:51.169818   51953 type.go:168] "Request Body" body=""
	I1210 05:48:51.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:51.170229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:51.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:48:51.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:51.670261   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:52.169845   51953 type.go:168] "Request Body" body=""
	I1210 05:48:52.169924   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:52.170187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:52.170237   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:52.669804   51953 type.go:168] "Request Body" body=""
	I1210 05:48:52.669873   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:52.670187   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:53.169870   51953 type.go:168] "Request Body" body=""
	I1210 05:48:53.169941   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:53.170273   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:53.669765   51953 type.go:168] "Request Body" body=""
	I1210 05:48:53.669863   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:53.670120   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:54.169803   51953 type.go:168] "Request Body" body=""
	I1210 05:48:54.169877   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:54.170216   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:54.170270   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:54.670065   51953 type.go:168] "Request Body" body=""
	I1210 05:48:54.670136   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:54.670470   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:55.169793   51953 type.go:168] "Request Body" body=""
	I1210 05:48:55.169876   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:55.170142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:55.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:48:55.669919   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:55.670247   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:56.169832   51953 type.go:168] "Request Body" body=""
	I1210 05:48:56.169907   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:56.170229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:56.669896   51953 type.go:168] "Request Body" body=""
	I1210 05:48:56.669967   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:56.670287   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:56.670338   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:57.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:48:57.169898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:57.170223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:57.669803   51953 type.go:168] "Request Body" body=""
	I1210 05:48:57.669879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:57.670238   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:58.169908   51953 type.go:168] "Request Body" body=""
	I1210 05:48:58.169985   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:58.170322   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:58.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:48:58.670098   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:58.670445   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:48:58.670497   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:48:59.170301   51953 type.go:168] "Request Body" body=""
	I1210 05:48:59.170378   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:59.170749   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:48:59.670557   51953 type.go:168] "Request Body" body=""
	I1210 05:48:59.670633   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:48:59.670949   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:00.169725   51953 type.go:168] "Request Body" body=""
	I1210 05:49:00.169813   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:00.170141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:00.670083   51953 type.go:168] "Request Body" body=""
	I1210 05:49:00.670159   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:00.670486   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:00.670533   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:01.169951   51953 type.go:168] "Request Body" body=""
	I1210 05:49:01.170038   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:01.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:01.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:49:01.669895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:01.670223   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:02.169846   51953 type.go:168] "Request Body" body=""
	I1210 05:49:02.169918   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:02.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:02.669747   51953 type.go:168] "Request Body" body=""
	I1210 05:49:02.669829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:02.670143   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:03.169862   51953 type.go:168] "Request Body" body=""
	I1210 05:49:03.169937   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:03.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:03.170307   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:03.669983   51953 type.go:168] "Request Body" body=""
	I1210 05:49:03.670055   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:03.670401   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:04.169978   51953 type.go:168] "Request Body" body=""
	I1210 05:49:04.170070   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:04.170429   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:04.670184   51953 type.go:168] "Request Body" body=""
	I1210 05:49:04.670254   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:04.670541   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:05.169853   51953 type.go:168] "Request Body" body=""
	I1210 05:49:05.169926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:05.170261   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:05.669805   51953 type.go:168] "Request Body" body=""
	I1210 05:49:05.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:05.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:05.670209   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:49:06.161707   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:06.170118   51953 type.go:168] "Request Body" body=""
	I1210 05:49:06.170187   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:06.170454   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:06.215983   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:06.219418   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:06.219449   51953 retry.go:31] will retry after 38.750779649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
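The retry.go:31 line above schedules a re-run of the failed apply after a jittered delay. A comparable retry-with-backoff pattern using apimachinery's wait package is sketched below; the backoff parameters and the direct kubectl invocation are assumptions for illustration, not minikube's own retry helper:

	package main

	import (
		"fmt"
		"os/exec"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Jittered exponential backoff, loosely mirroring the randomized
		// "will retry after ..." delays in the log (parameters assumed).
		backoff := wait.Backoff{Duration: 5 * time.Second, Factor: 2.0, Jitter: 0.5, Steps: 5}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			out, err := exec.Command("kubectl", "apply", "--force", "-f",
				"/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
			if err != nil {
				fmt.Printf("apply failed, will retry: %v\n%s", err, out)
				return false, nil // not done; retry after the next backoff step
			}
			return true, nil // applied successfully
		})
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}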
	[... 05:49:06.669–05:49:13.170: the node poll continues every ~500ms (14 attempts, all with empty responses); node_ready.go:55 repeats the "connect: connection refused" warning at 05:49:07.670, 05:49:09.670, and 05:49:12.170 ...]
	I1210 05:49:13.380712   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:13.443508   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:13.443549   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:49:13.443568   51953 retry.go:31] will retry after 17.108062036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 05:49:13.669–05:49:30.170: the node poll continues every ~500ms; node_ready.go:55 repeats the "connect: connection refused" warning roughly every two seconds, from 05:49:14.170 through 05:49:29.670 ...]
	I1210 05:49:30.552353   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:49:30.608474   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608517   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:30.608604   51953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
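All of the validation failures in this run share one root cause: kubectl cannot fetch the OpenAPI schema because nothing answers on port 8441. A hypothetical standalone probe for that precondition, with the URL and timeout taken from the error text above (the InsecureSkipVerify transport is an assumption, to tolerate the apiserver's self-signed certificate):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches the ?timeout=32s in kubectl's request
			Transport: &http.Transport{
				// The apiserver cert is not in the system trust store here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8441/openapi/v2")
		if err != nil {
			// With the apiserver down this prints the same "connection refused"
			// that makes kubectl's schema validation fail.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("openapi endpoint reachable:", resp.Status)
	}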
	[... 05:49:30.670–05:49:44.670: the node poll continues every ~500ms; node_ready.go:55 repeats the "connect: connection refused" warning at 05:49:32.170, 05:49:34.670, 05:49:37.170, 05:49:39.170, 05:49:41.670, and 05:49:43.670 ...]
	I1210 05:49:44.970959   51953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:49:45.060109   51953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064226   51953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:49:45.064337   51953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:49:45.067552   51953 out.go:179] * Enabled addons: 
	I1210 05:49:45.070225   51953 addons.go:530] duration metric: took 1m45.971891823s for enable addons: enabled=[]
	[... 05:49:45.169–05:49:58.670: the node poll continues every ~500ms; node_ready.go:55 repeats the "connect: connection refused" warning at 05:49:46.170, 05:49:48.670, 05:49:51.170, 05:49:53.170, 05:49:55.170, and 05:49:57.670 ...]
	I1210 05:49:59.170063   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.170156   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.170502   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:49:59.670708   51953 type.go:168] "Request Body" body=""
	I1210 05:49:59.670792   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:49:59.671123   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:49:59.671171   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:00.169945   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.170054   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.170391   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:00.670293   51953 type.go:168] "Request Body" body=""
	I1210 05:50:00.670372   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:00.670734   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.170379   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.170445   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.170785   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:01.670657   51953 type.go:168] "Request Body" body=""
	I1210 05:50:01.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:01.671101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:02.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.169916   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:02.170292   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:02.670641   51953 type.go:168] "Request Body" body=""
	I1210 05:50:02.670714   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:02.671049   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.169748   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.169821   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.170173   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:03.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:03.670257   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.169808   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.169878   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.170170   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:04.670153   51953 type.go:168] "Request Body" body=""
	I1210 05:50:04.670227   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:04.670558   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:04.670612   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:05.170389   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.170463   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.170790   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:05.670350   51953 type.go:168] "Request Body" body=""
	I1210 05:50:05.670419   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:05.670674   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.170479   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.170562   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.170930   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:06.670726   51953 type.go:168] "Request Body" body=""
	I1210 05:50:06.670801   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:06.671141   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:06.671199   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:07.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.170225   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:07.669823   51953 type.go:168] "Request Body" body=""
	I1210 05:50:07.669897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:07.670237   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.169822   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.169902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:08.669915   51953 type.go:168] "Request Body" body=""
	I1210 05:50:08.669997   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:08.670259   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:09.170295   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.170366   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.170686   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:09.170740   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:09.670199   51953 type.go:168] "Request Body" body=""
	I1210 05:50:09.670275   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:09.670611   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.170386   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.170464   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.170732   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:10.670493   51953 type.go:168] "Request Body" body=""
	I1210 05:50:10.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:10.670908   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:11.170688   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.170762   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.171109   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:11.171166   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:11.669753   51953 type.go:168] "Request Body" body=""
	I1210 05:50:11.669828   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:11.670111   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:12.169788   51953 type.go:168] "Request Body" body=""
	I1210 05:50:12.169865   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:12.170193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:12.669828   51953 type.go:168] "Request Body" body=""
	I1210 05:50:12.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:12.670272   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:13.169792   51953 type.go:168] "Request Body" body=""
	I1210 05:50:13.169860   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:13.170133   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:13.669802   51953 type.go:168] "Request Body" body=""
	I1210 05:50:13.669879   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:13.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:13.670257   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:14.169823   51953 type.go:168] "Request Body" body=""
	I1210 05:50:14.169893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:14.170243   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:14.670026   51953 type.go:168] "Request Body" body=""
	I1210 05:50:14.670100   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:14.670363   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:15.170050   51953 type.go:168] "Request Body" body=""
	I1210 05:50:15.170123   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:15.170471   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:15.670177   51953 type.go:168] "Request Body" body=""
	I1210 05:50:15.670250   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:15.670584   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:15.670636   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:16.170320   51953 type.go:168] "Request Body" body=""
	I1210 05:50:16.170389   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:16.170687   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:16.670498   51953 type.go:168] "Request Body" body=""
	I1210 05:50:16.670574   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:16.670936   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:17.170736   51953 type.go:168] "Request Body" body=""
	I1210 05:50:17.170817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:17.171164   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:17.670572   51953 type.go:168] "Request Body" body=""
	I1210 05:50:17.670637   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:17.670949   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:17.671005   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:18.169725   51953 type.go:168] "Request Body" body=""
	I1210 05:50:18.169795   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:18.170134   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:18.669834   51953 type.go:168] "Request Body" body=""
	I1210 05:50:18.669953   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:18.670308   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:19.170037   51953 type.go:168] "Request Body" body=""
	I1210 05:50:19.170104   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:19.170365   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:19.670201   51953 type.go:168] "Request Body" body=""
	I1210 05:50:19.670277   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:19.670610   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:20.170409   51953 type.go:168] "Request Body" body=""
	I1210 05:50:20.170484   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:20.170822   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:20.170877   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:20.670595   51953 type.go:168] "Request Body" body=""
	I1210 05:50:20.670666   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:20.670993   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:21.169731   51953 type.go:168] "Request Body" body=""
	I1210 05:50:21.169813   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:21.170125   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:21.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:21.669898   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:21.670193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:22.170669   51953 type.go:168] "Request Body" body=""
	I1210 05:50:22.170747   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:22.171033   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:22.171080   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:22.669728   51953 type.go:168] "Request Body" body=""
	I1210 05:50:22.669806   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:22.670127   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:23.169851   51953 type.go:168] "Request Body" body=""
	I1210 05:50:23.169933   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:23.170302   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:23.669972   51953 type.go:168] "Request Body" body=""
	I1210 05:50:23.670044   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:23.670358   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:24.170057   51953 type.go:168] "Request Body" body=""
	I1210 05:50:24.170129   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:24.170453   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:24.670197   51953 type.go:168] "Request Body" body=""
	I1210 05:50:24.670272   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:24.670612   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:24.670670   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:25.170337   51953 type.go:168] "Request Body" body=""
	I1210 05:50:25.170410   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:25.170739   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:25.670502   51953 type.go:168] "Request Body" body=""
	I1210 05:50:25.670572   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:25.670902   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:26.170691   51953 type.go:168] "Request Body" body=""
	I1210 05:50:26.170764   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:26.171108   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:26.669788   51953 type.go:168] "Request Body" body=""
	I1210 05:50:26.669867   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:26.670130   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:27.169817   51953 type.go:168] "Request Body" body=""
	I1210 05:50:27.169897   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:27.170246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:27.170300   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:27.669967   51953 type.go:168] "Request Body" body=""
	I1210 05:50:27.670045   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:27.670392   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:28.169739   51953 type.go:168] "Request Body" body=""
	I1210 05:50:28.169822   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:28.170150   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:28.669847   51953 type.go:168] "Request Body" body=""
	I1210 05:50:28.669930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:28.670221   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:29.170249   51953 type.go:168] "Request Body" body=""
	I1210 05:50:29.170325   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:29.170644   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:29.170699   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:29.670163   51953 type.go:168] "Request Body" body=""
	I1210 05:50:29.670232   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:29.670555   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:30.170356   51953 type.go:168] "Request Body" body=""
	I1210 05:50:30.170428   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:30.170751   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:30.670538   51953 type.go:168] "Request Body" body=""
	I1210 05:50:30.670611   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:30.670921   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:31.170427   51953 type.go:168] "Request Body" body=""
	I1210 05:50:31.170500   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:31.170752   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:31.170791   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:31.670569   51953 type.go:168] "Request Body" body=""
	I1210 05:50:31.670653   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:31.670969   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:32.169710   51953 type.go:168] "Request Body" body=""
	I1210 05:50:32.169785   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:32.170083   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:32.670743   51953 type.go:168] "Request Body" body=""
	I1210 05:50:32.670820   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:32.671121   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:33.169805   51953 type.go:168] "Request Body" body=""
	I1210 05:50:33.169903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:33.170255   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:33.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:50:33.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:33.670229   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:33.670285   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:34.169778   51953 type.go:168] "Request Body" body=""
	I1210 05:50:34.169859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:34.170189   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:34.670104   51953 type.go:168] "Request Body" body=""
	I1210 05:50:34.670184   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:34.670511   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:35.169809   51953 type.go:168] "Request Body" body=""
	I1210 05:50:35.169891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:35.170193   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:35.670544   51953 type.go:168] "Request Body" body=""
	I1210 05:50:35.670613   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:35.670878   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:35.670919   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:36.170712   51953 type.go:168] "Request Body" body=""
	I1210 05:50:36.170793   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:36.171084   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:36.669793   51953 type.go:168] "Request Body" body=""
	I1210 05:50:36.669873   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:36.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:37.169942   51953 type.go:168] "Request Body" body=""
	I1210 05:50:37.170016   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:37.170292   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:37.669831   51953 type.go:168] "Request Body" body=""
	I1210 05:50:37.669902   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:37.670220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:38.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:50:38.169910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:38.170283   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:38.170338   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:38.669845   51953 type.go:168] "Request Body" body=""
	I1210 05:50:38.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:38.670182   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:39.170144   51953 type.go:168] "Request Body" body=""
	I1210 05:50:39.170220   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:39.170549   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:39.670142   51953 type.go:168] "Request Body" body=""
	I1210 05:50:39.670218   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:39.670527   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:40.170193   51953 type.go:168] "Request Body" body=""
	I1210 05:50:40.170274   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:40.170539   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:40.170603   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:40.670363   51953 type.go:168] "Request Body" body=""
	I1210 05:50:40.670438   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:40.670794   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:41.170587   51953 type.go:168] "Request Body" body=""
	I1210 05:50:41.170671   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:41.171005   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:41.669712   51953 type.go:168] "Request Body" body=""
	I1210 05:50:41.669800   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:41.670128   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:42.169860   51953 type.go:168] "Request Body" body=""
	I1210 05:50:42.169951   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:42.170314   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:42.669840   51953 type.go:168] "Request Body" body=""
	I1210 05:50:42.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:42.670232   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:42.670293   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:43.169767   51953 type.go:168] "Request Body" body=""
	I1210 05:50:43.169833   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:43.170101   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:43.669850   51953 type.go:168] "Request Body" body=""
	I1210 05:50:43.669921   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:43.670246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:44.169977   51953 type.go:168] "Request Body" body=""
	I1210 05:50:44.170071   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:44.170414   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:44.670140   51953 type.go:168] "Request Body" body=""
	I1210 05:50:44.670226   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:44.670613   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:44.670677   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:45.170475   51953 type.go:168] "Request Body" body=""
	I1210 05:50:45.170563   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:45.170891   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:45.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:50:45.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:45.670222   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:46.169767   51953 type.go:168] "Request Body" body=""
	I1210 05:50:46.169838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:46.170104   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:46.669827   51953 type.go:168] "Request Body" body=""
	I1210 05:50:46.669903   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:46.670226   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:47.169875   51953 type.go:168] "Request Body" body=""
	I1210 05:50:47.169958   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:47.170385   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:47.170442   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:50:47.670688   51953 type.go:168] "Request Body" body=""
	I1210 05:50:47.670757   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:47.671081   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:48.169796   51953 type.go:168] "Request Body" body=""
	I1210 05:50:48.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:48.170206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:48.669926   51953 type.go:168] "Request Body" body=""
	I1210 05:50:48.670000   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:48.670320   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:50:49.170308   51953 type.go:168] "Request Body" body=""
	I1210 05:50:49.170376   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:50:49.170645   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:50:49.170686   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling continued: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-644034 "Request"/"Response" pairs repeated roughly every 500ms from 05:50:49 through 05:51:50, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused", with node_ready.go:55 "will retry" warnings emitted after every few failed attempts, exactly as in the entries above]
	I1210 05:51:50.170360   51953 type.go:168] "Request Body" body=""
	I1210 05:51:50.170435   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:50.170752   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:50.670554   51953 type.go:168] "Request Body" body=""
	I1210 05:51:50.670636   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:50.670942   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:51.170729   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.171139   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:51.171187   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:51.669733   51953 type.go:168] "Request Body" body=""
	I1210 05:51:51.669807   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:51.670146   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.169850   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.169929   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.170284   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:52.669819   51953 type.go:168] "Request Body" body=""
	I1210 05:51:52.669893   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:52.670207   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.169992   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:53.669991   51953 type.go:168] "Request Body" body=""
	I1210 05:51:53.670070   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:53.670340   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:53.670380   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:54.170031   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.170110   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.170441   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:54.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:54.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:54.670529   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.169832   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.170177   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:55.669779   51953 type.go:168] "Request Body" body=""
	I1210 05:51:55.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:55.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:56.169901   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.169974   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.170321   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:56.170373   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:56.669742   51953 type.go:168] "Request Body" body=""
	I1210 05:51:56.669816   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:56.670103   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.169781   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.169856   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.170181   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:57.669889   51953 type.go:168] "Request Body" body=""
	I1210 05:51:57.669965   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:57.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:58.170689   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.170758   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.171080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:51:58.171123   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:51:58.669796   51953 type.go:168] "Request Body" body=""
	I1210 05:51:58.669872   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:58.670204   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.170073   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.170150   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.170489   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:51:59.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:51:59.670202   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:51:59.670565   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.170445   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.170546   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.170880   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:00.669804   51953 type.go:168] "Request Body" body=""
	I1210 05:52:00.669883   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:00.670208   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:00.670259   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:01.169772   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.169859   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:01.670021   51953 type.go:168] "Request Body" body=""
	I1210 05:52:01.670097   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:01.670433   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:02.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:52:02.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:02.670276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:02.670355   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:03.170035   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.170108   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.170401   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:03.669817   51953 type.go:168] "Request Body" body=""
	I1210 05:52:03.669890   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:03.670202   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.169758   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.169836   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.170117   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:04.670079   51953 type.go:168] "Request Body" body=""
	I1210 05:52:04.670164   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:04.670516   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:04.670563   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:05.169839   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.169935   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.170260   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:05.669757   51953 type.go:168] "Request Body" body=""
	I1210 05:52:05.669823   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:05.670097   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.170195   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:06.669851   51953 type.go:168] "Request Body" body=""
	I1210 05:52:06.669926   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:06.670297   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:07.169768   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.169841   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.170149   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:07.170196   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:07.669845   51953 type.go:168] "Request Body" body=""
	I1210 05:52:07.669915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:07.670239   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.169973   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.170047   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.170399   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:08.670082   51953 type.go:168] "Request Body" body=""
	I1210 05:52:08.670165   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:08.670485   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:09.170372   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.170444   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.170740   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:09.170790   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:09.670553   51953 type.go:168] "Request Body" body=""
	I1210 05:52:09.670631   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:09.670948   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.170667   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.170738   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.170996   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:10.669729   51953 type.go:168] "Request Body" body=""
	I1210 05:52:10.669805   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:10.670126   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.169831   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:11.669912   51953 type.go:168] "Request Body" body=""
	I1210 05:52:11.669979   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:11.670240   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:11.670280   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:12.169939   51953 type.go:168] "Request Body" body=""
	I1210 05:52:12.170014   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:12.170362   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:12.670078   51953 type.go:168] "Request Body" body=""
	I1210 05:52:12.670162   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:12.670514   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:13.169756   51953 type.go:168] "Request Body" body=""
	I1210 05:52:13.169834   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:13.170093   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:13.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:52:13.669896   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:13.670227   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:14.169820   51953 type.go:168] "Request Body" body=""
	I1210 05:52:14.169895   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:14.170241   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:14.170294   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:14.670030   51953 type.go:168] "Request Body" body=""
	I1210 05:52:14.670095   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:14.670375   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:15.170120   51953 type.go:168] "Request Body" body=""
	I1210 05:52:15.170196   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:15.170539   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:15.670302   51953 type.go:168] "Request Body" body=""
	I1210 05:52:15.670373   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:15.670676   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:16.170432   51953 type.go:168] "Request Body" body=""
	I1210 05:52:16.170507   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:16.170803   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:16.170857   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:16.670503   51953 type.go:168] "Request Body" body=""
	I1210 05:52:16.670576   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:16.670887   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:17.170709   51953 type.go:168] "Request Body" body=""
	I1210 05:52:17.170781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:17.171089   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:17.669750   51953 type.go:168] "Request Body" body=""
	I1210 05:52:17.669817   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:17.670129   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:18.169829   51953 type.go:168] "Request Body" body=""
	I1210 05:52:18.169909   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:18.170246   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:18.669810   51953 type.go:168] "Request Body" body=""
	I1210 05:52:18.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:18.670224   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:18.670276   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:19.170163   51953 type.go:168] "Request Body" body=""
	I1210 05:52:19.170242   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:19.170554   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:19.670487   51953 type.go:168] "Request Body" body=""
	I1210 05:52:19.670569   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:19.670973   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:20.169737   51953 type.go:168] "Request Body" body=""
	I1210 05:52:20.169824   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:20.170206   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:20.669861   51953 type.go:168] "Request Body" body=""
	I1210 05:52:20.669938   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:20.670209   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:21.169833   51953 type.go:168] "Request Body" body=""
	I1210 05:52:21.169904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:21.170238   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:21.170290   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:21.669832   51953 type.go:168] "Request Body" body=""
	I1210 05:52:21.669911   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:21.670234   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:22.169913   51953 type.go:168] "Request Body" body=""
	I1210 05:52:22.169983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:22.170242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:22.669812   51953 type.go:168] "Request Body" body=""
	I1210 05:52:22.669891   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:22.670179   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:23.169847   51953 type.go:168] "Request Body" body=""
	I1210 05:52:23.169930   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:23.170252   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:23.669962   51953 type.go:168] "Request Body" body=""
	I1210 05:52:23.670037   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:23.670326   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:23.670367   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:24.170037   51953 type.go:168] "Request Body" body=""
	I1210 05:52:24.170109   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:24.170439   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:24.670168   51953 type.go:168] "Request Body" body=""
	I1210 05:52:24.670241   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:24.670573   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:25.170350   51953 type.go:168] "Request Body" body=""
	I1210 05:52:25.170421   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:25.170687   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:25.670431   51953 type.go:168] "Request Body" body=""
	I1210 05:52:25.670504   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:25.670821   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:25.670873   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:26.170481   51953 type.go:168] "Request Body" body=""
	I1210 05:52:26.170555   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:26.170912   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:26.670658   51953 type.go:168] "Request Body" body=""
	I1210 05:52:26.670730   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:26.670998   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:27.169719   51953 type.go:168] "Request Body" body=""
	I1210 05:52:27.169797   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:27.170122   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:27.669792   51953 type.go:168] "Request Body" body=""
	I1210 05:52:27.669870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:27.670184   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:28.169778   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.169852   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.170172   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:28.170229   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:28.669766   51953 type.go:168] "Request Body" body=""
	I1210 05:52:28.669838   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:28.670153   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.170045   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.170125   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.170415   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:29.670123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:29.670193   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:29.670453   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:30.170123   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.170199   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.170559   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:30.170635   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:30.670127   51953 type.go:168] "Request Body" body=""
	I1210 05:52:30.670200   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:30.670509   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.169755   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.169839   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.170095   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:31.669800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:31.669875   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:31.670200   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.169800   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.169874   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.170198   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:32.669769   51953 type.go:168] "Request Body" body=""
	I1210 05:52:32.669851   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:32.670162   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:32.670212   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:33.169842   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.169915   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.170245   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:33.669797   51953 type.go:168] "Request Body" body=""
	I1210 05:52:33.669887   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:33.670199   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.169925   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.170009   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.170331   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:34.670116   51953 type.go:168] "Request Body" body=""
	I1210 05:52:34.670194   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:34.670515   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:34.670571   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:35.170367   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.170782   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:35.670577   51953 type.go:168] "Request Body" body=""
	I1210 05:52:35.670647   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:35.670912   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.170722   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.170802   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.171183   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:36.669843   51953 type.go:168] "Request Body" body=""
	I1210 05:52:36.669914   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:36.670291   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:37.170702   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.170771   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.171105   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:37.171165   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:52:37.669824   51953 type.go:168] "Request Body" body=""
	I1210 05:52:37.669910   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:37.670242   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.169838   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.169909   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.170276   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:38.669712   51953 type.go:168] "Request Body" body=""
	I1210 05:52:38.669779   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:38.670087   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.169961   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.170037   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.170366   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:52:39.670236   51953 type.go:168] "Request Body" body=""
	I1210 05:52:39.670306   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:52:39.670633   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:52:39.670687   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 identical poll iterations elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-644034 request with the same Accept and User-Agent headers repeated every ~500 ms with empty responses (status="" headers="" milliseconds=0), and the "connection refused" node_ready.go warning recurring roughly every 2 s from 05:52:42 through 05:53:38 ...]
	I1210 05:53:41.169912   51953 type.go:168] "Request Body" body=""
	I1210 05:53:41.169995   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:41.170355   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:41.170412   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:41.670056   51953 type.go:168] "Request Body" body=""
	I1210 05:53:41.670122   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:41.670440   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.169858   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.169947   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.170336   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:42.670088   51953 type.go:168] "Request Body" body=""
	I1210 05:53:42.670163   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:42.670484   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:43.170162   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.170230   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.170547   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:43.170609   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:43.670381   51953 type.go:168] "Request Body" body=""
	I1210 05:53:43.670459   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:43.670797   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.170478   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.170553   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.170917   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:44.670710   51953 type.go:168] "Request Body" body=""
	I1210 05:53:44.670781   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:44.671096   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.169825   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.169927   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.170248   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:45.670167   51953 type.go:168] "Request Body" body=""
	I1210 05:53:45.670243   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:45.670596   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:45.670654   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:46.170356   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.170470   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.170775   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:46.670631   51953 type.go:168] "Request Body" body=""
	I1210 05:53:46.670706   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:46.671056   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.169777   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.169864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.170180   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:47.670484   51953 type.go:168] "Request Body" body=""
	I1210 05:53:47.670573   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:47.670850   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:47.670896   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:48.170703   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.170773   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.171186   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:48.669837   51953 type.go:168] "Request Body" body=""
	I1210 05:53:48.669922   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:48.670270   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.170239   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.170314   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.170632   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:49.670158   51953 type.go:168] "Request Body" body=""
	I1210 05:53:49.670250   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:49.670638   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:50.170456   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.170536   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.170897   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:50.170949   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:50.670681   51953 type.go:168] "Request Body" body=""
	I1210 05:53:50.670750   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:50.671080   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.169790   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.169870   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.170201   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:51.669911   51953 type.go:168] "Request Body" body=""
	I1210 05:53:51.669983   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:51.670289   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.169760   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.169885   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.170158   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:52.669815   51953 type.go:168] "Request Body" body=""
	I1210 05:53:52.669920   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:52.670250   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:52.670299   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:53.169827   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.169906   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.170220   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:53.669788   51953 type.go:168] "Request Body" body=""
	I1210 05:53:53.669864   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:53.670142   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.169882   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.169960   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.170294   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:54.670138   51953 type.go:168] "Request Body" body=""
	I1210 05:53:54.670217   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:54.670514   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:54.670556   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:55.169762   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.169829   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.170092   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:55.669829   51953 type.go:168] "Request Body" body=""
	I1210 05:53:55.669904   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:55.670235   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.169933   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.170005   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.170326   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:56.669987   51953 type.go:168] "Request Body" body=""
	I1210 05:53:56.670052   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:56.670317   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:57.170000   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.170105   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.170463   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:57.170520   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:57.670190   51953 type.go:168] "Request Body" body=""
	I1210 05:53:57.670263   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:57.670595   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.170369   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.170443   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.170773   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:58.670583   51953 type.go:168] "Request Body" body=""
	I1210 05:53:58.670669   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:58.671047   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:53:59.170051   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.170137   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.170479   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 05:53:59.170549   51953 node_ready.go:55] error getting node "functional-644034" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-644034": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 05:53:59.669763   51953 type.go:168] "Request Body" body=""
	I1210 05:53:59.669831   51953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-644034" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 05:53:59.670493   51953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 05:54:00.169895   51953 type.go:168] "Request Body" body=""
	I1210 05:54:00.170210   51953 node_ready.go:38] duration metric: took 6m0.000621671s for node "functional-644034" to be "Ready" ...
	I1210 05:54:00.173449   51953 out.go:203] 
	W1210 05:54:00.176680   51953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 05:54:00.176713   51953 out.go:285] * 
	W1210 05:54:00.178858   51953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:54:00.215003   51953 out.go:203] 
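The wait loop above is minikube polling the Node object's Ready condition every ~500 ms until its 6m0s --wait budget expires; since the apiserver on 192.168.49.2:8441 never accepts a connection, every probe fails identically. A rough hand-run equivalent of that check, useful for reproducing the timeout outside the test harness (a sketch — the profile/context name and the 6-minute budget are taken from the log above):

    # Poll the node's Ready condition roughly the way node_ready.go does:
    # one GET every 500 ms until the node reports Ready or the deadline passes.
    deadline=$(( $(date +%s) + 360 ))   # 6m0s, matching the wait above
    while [ "$(date +%s)" -lt "$deadline" ]; do
      status=$(kubectl --context functional-644034 get node functional-644034 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
      [ "$status" = "True" ] && echo "node Ready" && exit 0
      sleep 0.5
    done
    echo "node never became Ready (apiserver still refusing connections)" >&2
    exit 1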
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:54:08 functional-644034 containerd[5850]: time="2025-12-10T05:54:08.049216332Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.127962117Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.130217614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.137682683Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:09 functional-644034 containerd[5850]: time="2025-12-10T05:54:09.138174810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.117118352Z" level=info msg="No images store for sha256:7c7a98f5977d00426b0ab442a3313f38d8159556e5fd94c8cdab70d2b3d72bfe"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.119575436Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-644034\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.128559554Z" level=info msg="ImageCreate event name:\"sha256:187b8b0a3596efc82d8108da07255f790e24f4da482c7a2aa9f3e56dbd5d3e50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.129618394Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.938450474Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.941022095Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.943336087Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 10 05:54:10 functional-644034 containerd[5850]: time="2025-12-10T05:54:10.957350615Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.985409887Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.987603597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.999030250Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:11 functional-644034 containerd[5850]: time="2025-12-10T05:54:11.999845731Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.020723561Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.023105509Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.025084242Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.032702978Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.168636692Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.170776379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.179474248Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 05:54:12 functional-644034 containerd[5850]: time="2025-12-10T05:54:12.180046114Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:54:16.368985    9944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:16.369472    9944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:16.370931    9944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:16.371279    9944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:54:16.372715    9944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
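Every kubectl failure above is the same symptom: nothing is listening on apiserver port 8441 inside the node, so both the in-cluster URL (192.168.49.2:8441) and the tunneled localhost:8441 get connection refused. Two quick host-side checks that would confirm this (container name from the docker inspect output further below; a sketch):

    # Is anything bound to the apiserver port inside the minikube container?
    docker exec functional-644034 sh -c 'ss -ltn | grep :8441 || echo "nothing listening on 8441"'
    # The kubelet is what should launch the kube-apiserver static pod:
    docker exec functional-644034 systemctl status kubelet --no-pager | tail -n 5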
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 05:54:16 up 36 min,  0 user,  load average: 0.47, 0.40, 0.59
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 10 05:54:13 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:13 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:13 functional-644034 kubelet[9805]: E1210 05:54:13.982222    9805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:13 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:14 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 10 05:54:14 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:14 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:14 functional-644034 kubelet[9821]: E1210 05:54:14.730461    9821 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:14 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:14 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:15 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 10 05:54:15 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:15 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:15 functional-644034 kubelet[9849]: E1210 05:54:15.460721    9849 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:15 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:15 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:54:16 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 10 05:54:16 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:16 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:54:16 functional-644034 kubelet[9907]: E1210 05:54:16.228192    9907 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:54:16 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:54:16 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
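The kubelet crash loop in the log above (restart counter 827 through 830, roughly one attempt per second) is the failure kubeadm's preflight warning describes: on a cgroup v1 host, kubelet v1.35+ refuses to start unless the KubeletConfiguration explicitly sets failCgroupV1 to false. A minimal way to verify both halves of that on the node (standard paths; treat as illustrative):

    # cgroup2fs means cgroup v2; tmpfs means the deprecated cgroup v1.
    stat -fc %T /sys/fs/cgroup
    # Per the preflight warning, a v1 host needs this in the kubelet config
    # (e.g. /var/lib/kubelet/config.yaml):  failCgroupV1: false
    grep -n failCgroupV1 /var/lib/kubelet/config.yaml || echo "failCgroupV1 not set"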
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (371.144938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (735.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644034 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 05:56:44.575529    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:58:37.013487    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:00.083361    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:01:44.571378    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:03:37.013828    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-644034 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.427759049s)

                                                
                                                
-- stdout --
	* [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00011534s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout / stderr: [identical to the first kubeadm init attempt above: same preflight and certs/kubeconfig/control-plane phases, the kubelet health check failing after 4m0.000138118s, the same SystemVerification / cgroups v1 / Service-kubelet warnings, and the same wait-control-plane "connection refused" error]
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout / stderr: [identical to the kubeadm init output of the retry above]
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
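minikube's own suggestion above is the usual next step when the kubelet never becomes healthy; applied to this profile it would look like the following (flags taken from the suggestion and the original invocation — though given the FailCgroupV1 validation in the kubelet logs, a cgroup v1 host may still need that kubelet option cleared as well):

    out/minikube-linux-arm64 logs --file=logs.txt -p functional-644034
    out/minikube-linux-arm64 start -p functional-644034 \
      --extra-config=kubelet.cgroup-driver=systemd \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all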
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-644034 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.428973451s for "functional-644034" cluster.
I1210 06:06:30.810356    4116 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
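This inspect JSON is where host-side port mappings live. A sketch of reading the forwarded API-server port with the same Go-template pattern the cli_runner lines further down use for 22/tcp (port 8441 and the profile name come from this run's config):

	# Prints 32791, per the NetworkSettings.Ports block above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-644034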
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (371.675674ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
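Because the host container still reports Running, the kubelet probe from the wait-control-plane error can be replayed by hand. A sketch (the curl invocation is quoted verbatim from the kubeadm error; expecting the same connection-refused result is an assumption):

	# Re-run kubeadm's health check from inside the node
	out/minikube-linux-arm64 -p functional-644034 ssh -- curl -sSL http://127.0.0.1:10248/healthz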
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-944360 image ls --format yaml --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh     │ functional-944360 ssh pgrep buildkitd                                                                                                                 │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ image   │ functional-944360 image ls --format json --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls --format table --alsologtostderr                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr                                                │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls                                                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ delete  │ -p functional-944360                                                                                                                                  │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ start   │ -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ start   │ -p functional-644034 --alsologtostderr -v=8                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │                     │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:latest                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add minikube-local-cache-test:functional-644034                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache delete minikube-local-cache-test:functional-644034                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl images                                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ cache   │ functional-644034 cache reload                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ kubectl │ functional-644034 kubectl -- --context functional-644034 get pods                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ start   │ -p functional-644034 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:54:17.426935   57716 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:54:17.427082   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427086   57716 out.go:374] Setting ErrFile to fd 2...
	I1210 05:54:17.427090   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427361   57716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:54:17.427717   57716 out.go:368] Setting JSON to false
	I1210 05:54:17.428531   57716 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2208,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:54:17.428587   57716 start.go:143] virtualization:  
	I1210 05:54:17.432151   57716 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:54:17.435955   57716 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:54:17.436010   57716 notify.go:221] Checking for updates...
	I1210 05:54:17.441966   57716 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:54:17.444885   57716 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:54:17.447901   57716 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:54:17.450919   57716 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:54:17.453767   57716 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:54:17.457197   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:17.457296   57716 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:54:17.484154   57716 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:54:17.484249   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.544910   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.535741476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:54:17.545002   57716 docker.go:319] overlay module found
	I1210 05:54:17.548056   57716 out.go:179] * Using the docker driver based on existing profile
	I1210 05:54:17.550880   57716 start.go:309] selected driver: docker
	I1210 05:54:17.550888   57716 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.550973   57716 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:54:17.551147   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.606051   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.597194445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:54:17.606475   57716 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:54:17.606497   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:17.606551   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:17.606592   57716 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.611686   57716 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:54:17.614501   57716 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:54:17.617345   57716 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:54:17.620208   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:17.620284   57716 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:54:17.639591   57716 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:54:17.639602   57716 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:54:17.674108   57716 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:54:17.814864   57716 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 05:54:17.815057   57716 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:54:17.815157   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:17.815311   57716 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:54:17.815341   57716 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:17.815383   57716 start.go:364] duration metric: took 26.643µs to acquireMachinesLock for "functional-644034"
	I1210 05:54:17.815394   57716 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:54:17.815398   57716 fix.go:54] fixHost starting: 
	I1210 05:54:17.815657   57716 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:54:17.832534   57716 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:54:17.832556   57716 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:54:17.836244   57716 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:54:17.836271   57716 machine.go:94] provisionDockerMachine start ...
	I1210 05:54:17.836346   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:17.858100   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:17.858407   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:17.858412   57716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:54:17.974240   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.011085   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:54:18.011101   57716 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:54:18.011170   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.035073   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.035392   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.035402   57716 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:54:18.133146   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.205140   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:54:18.205224   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.223112   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.223456   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.223470   57716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:54:18.298229   57716 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298265   57716 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298312   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:54:18.298319   57716 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.857µs
	I1210 05:54:18.298326   57716 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:54:18.298329   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:54:18.298336   57716 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298351   57716 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 82.455µs
	I1210 05:54:18.298357   57716 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298363   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:54:18.298368   57716 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.182µs
	I1210 05:54:18.298372   57716 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:54:18.298368   57716 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298381   57716 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298411   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:54:18.298406   57716 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298417   57716 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.08µs
	I1210 05:54:18.298422   57716 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:54:18.298434   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:54:18.298430   57716 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298438   57716 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 33.1µs
	I1210 05:54:18.298443   57716 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:54:18.298232   57716 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298464   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:54:18.298468   57716 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 256.891µs
	I1210 05:54:18.298472   57716 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298474   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:54:18.298480   57716 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.314µs
	I1210 05:54:18.298482   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:54:18.298484   57716 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298489   57716 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 122.242µs
	I1210 05:54:18.298496   57716 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298511   57716 cache.go:87] Successfully saved all images to host disk.
	I1210 05:54:18.371362   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:54:18.371378   57716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:54:18.371397   57716 ubuntu.go:190] setting up certificates
	I1210 05:54:18.371416   57716 provision.go:84] configureAuth start
	I1210 05:54:18.371483   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:18.389550   57716 provision.go:143] copyHostCerts
	I1210 05:54:18.389620   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:54:18.389627   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:54:18.389704   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:54:18.389803   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:54:18.389808   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:54:18.389833   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:54:18.389882   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:54:18.389885   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:54:18.389906   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:54:18.389948   57716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:54:18.683488   57716 provision.go:177] copyRemoteCerts
	I1210 05:54:18.683553   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:54:18.683598   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.701578   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.806523   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:54:18.823889   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:54:18.841176   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:54:18.858693   57716 provision.go:87] duration metric: took 487.253139ms to configureAuth
	I1210 05:54:18.858709   57716 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:54:18.858903   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:18.858907   57716 machine.go:97] duration metric: took 1.02263281s to provisionDockerMachine
	I1210 05:54:18.858914   57716 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:54:18.858924   57716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:54:18.858977   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:54:18.859033   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.876377   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.982817   57716 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:54:18.986081   57716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:54:18.986098   57716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:54:18.986108   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:54:18.986162   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:54:18.986244   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:54:18.986314   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:54:18.986361   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:54:18.994265   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:19.014263   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:54:19.031905   57716 start.go:296] duration metric: took 172.976805ms for postStartSetup
	I1210 05:54:19.031977   57716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:54:19.032030   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.049399   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.152285   57716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:54:19.157124   57716 fix.go:56] duration metric: took 1.341718894s for fixHost
	I1210 05:54:19.157140   57716 start.go:83] releasing machines lock for "functional-644034", held for 1.341749918s
	I1210 05:54:19.157254   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:19.178380   57716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:54:19.178438   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.178590   57716 ssh_runner.go:195] Run: cat /version.json
	I1210 05:54:19.178645   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.200917   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.208552   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.319193   57716 ssh_runner.go:195] Run: systemctl --version
	I1210 05:54:19.412255   57716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:54:19.416947   57716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:54:19.417021   57716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:54:19.424890   57716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:54:19.424903   57716 start.go:496] detecting cgroup driver to use...
	I1210 05:54:19.424932   57716 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:54:19.425004   57716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:54:19.440745   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:54:19.453977   57716 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:54:19.454039   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:54:19.469832   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:54:19.482994   57716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:54:19.599891   57716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:54:19.715074   57716 docker.go:234] disabling docker service ...
	I1210 05:54:19.715128   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:54:19.730660   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:54:19.743680   57716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:54:19.856717   57716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:54:20.006361   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:54:20.021419   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:54:20.038786   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.191836   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:54:20.201486   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:54:20.210685   57716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:54:20.210748   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:54:20.219896   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.228857   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:54:20.237489   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.246148   57716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:54:20.253998   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:54:20.262613   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:54:20.271236   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:54:20.280061   57716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:54:20.287623   57716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:54:20.295156   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:20.415485   57716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:54:20.529881   57716 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:54:20.529941   57716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
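The "Will wait 60s for socket path" step above is a plain stat poll against the containerd socket after the daemon restart. A minimal sketch of the same pattern (path and timeout come from the log; the helper name and 500ms interval are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}
```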
	I1210 05:54:20.533915   57716 start.go:564] Will wait 60s for crictl version
	I1210 05:54:20.533980   57716 ssh_runner.go:195] Run: which crictl
	I1210 05:54:20.537488   57716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:54:20.562843   57716 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:54:20.562909   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.586515   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.613476   57716 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:54:20.616435   57716 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:54:20.632538   57716 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:54:20.639504   57716 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 05:54:20.642345   57716 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:54:20.642611   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.817647   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.968512   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:21.117681   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:21.117754   57716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:54:21.141602   57716 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:54:21.141614   57716 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:54:21.141620   57716 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:54:21.141710   57716 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:54:21.141768   57716 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:54:21.167304   57716 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 05:54:21.167327   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:21.167335   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:21.167343   57716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:54:21.167363   57716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:54:21.167468   57716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
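The kubeadm config dumped above is rendered from the options struct logged at kubeadm.go:190 (advertise address, bind port, node name, pod subnet, and so on). A minimal, hypothetical sketch of that kind of rendering with text/template — the struct fields and template text here are illustrative, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// Opts holds a few of the values substituted into the YAML above;
// the real generator threads many more options through its templates.
type Opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	// Values taken from the log: node IP 192.168.49.2, API port 8441.
	_ = t.Execute(os.Stdout, Opts{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8441,
		NodeName:         "functional-644034",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.35.0-rc.1",
	})
}
```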
	I1210 05:54:21.167528   57716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:54:21.175157   57716 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:54:21.175220   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:54:21.182336   57716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:54:21.194714   57716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:54:21.206951   57716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1210 05:54:21.218855   57716 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:54:21.222543   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:21.341027   57716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:54:21.356762   57716 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:54:21.356773   57716 certs.go:195] generating shared ca certs ...
	I1210 05:54:21.356789   57716 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:54:21.356923   57716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:54:21.356964   57716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:54:21.356970   57716 certs.go:257] generating profile certs ...
	I1210 05:54:21.357053   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:54:21.357114   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:54:21.357152   57716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:54:21.357258   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:54:21.357288   57716 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:54:21.357307   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:54:21.357333   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:54:21.357354   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:54:21.357375   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:54:21.357423   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:21.357978   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:54:21.378744   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:54:21.397697   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:54:21.419957   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:54:21.438314   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:54:21.455834   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:54:21.473865   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:54:21.494612   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:54:21.512109   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:54:21.529720   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:54:21.547670   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:54:21.568707   57716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:54:21.582063   57716 ssh_runner.go:195] Run: openssl version
	I1210 05:54:21.588394   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.595862   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:54:21.603363   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607193   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607247   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.648234   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
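The openssl x509 -hash call above explains the b5213941.0 symlink checked next: OpenSSL looks CA certificates up by subject hash, so each PEM installed under /usr/share/ca-certificates gets a <hash>.0 link in /etc/ssl/certs. A small sketch of deriving that link name by shelling out to openssl, as the log does (paths are the ones from the log; the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// certLinkName returns the /etc/ssl/certs symlink a CA PEM should have,
// e.g. b5213941.0 for the minikubeCA.pem hashed in the log above.
func certLinkName(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return filepath.Join("/etc/ssl/certs", hash+".0"), nil
}

func main() {
	link, err := certLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println("expected symlink:", link)
}
```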
	I1210 05:54:21.655574   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.662804   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:54:21.670452   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674182   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674235   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.715273   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:54:21.722425   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.729498   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:54:21.736743   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740323   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740376   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.780972   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:54:21.788152   57716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:54:21.791770   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:54:21.832469   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:54:21.875333   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:54:21.915959   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:54:21.956552   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:54:21.998157   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
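Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours (86400 seconds); exit status 1 would trigger regeneration. The same check can be expressed natively with crypto/x509; a minimal sketch (the path is one of the certs from the log, the helper name is an assumption):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath
// expires within d, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring within d means NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```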
	I1210 05:54:22.041430   57716 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:22.041511   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:54:22.041600   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.071281   57716 cri.go:89] found id: ""
	I1210 05:54:22.071348   57716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:54:22.079286   57716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:54:22.079296   57716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:54:22.079350   57716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:54:22.086777   57716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.087401   57716 kubeconfig.go:125] found "functional-644034" server: "https://192.168.49.2:8441"
	I1210 05:54:22.088728   57716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:54:22.096851   57716 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:39:45.645176984 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 05:54:21.211483495 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
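The drift detection above hinges on diff's exit status: 0 means the rendered kubeadm.yaml.new matches the deployed kubeadm.yaml, 1 means the files differ (reconfigure the cluster), anything else is a real error. A minimal sketch of reading that tri-state from Go (command and paths as in the log; the helper name is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps its exit status:
// 0 = no drift, 1 = drift (reconfigure), anything else = real error.
func configDrifted(oldPath, newPath string) (bool, error) {
	cmd := exec.Command("diff", "-u", oldPath, newPath)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return false, nil // identical files
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("drift detected:\n%s", out)
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	fmt.Println("drifted:", drifted)
}
```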
	I1210 05:54:22.096860   57716 kubeadm.go:1161] stopping kube-system containers ...
	I1210 05:54:22.096878   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 05:54:22.096937   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.122240   57716 cri.go:89] found id: ""
	I1210 05:54:22.122301   57716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 05:54:22.139987   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:54:22.147655   57716 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 05:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:43 /etc/kubernetes/scheduler.conf
	
	I1210 05:54:22.147725   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:54:22.155240   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:54:22.163328   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.163381   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:54:22.170477   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.178188   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.178242   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.185324   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:54:22.192557   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.192613   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:54:22.199756   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:54:22.207462   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:22.254516   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:23.834868   57716 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.580327189s)
	I1210 05:54:23.834928   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.033268   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.102476   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.150822   57716 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:54:24.150892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:24.651134   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:25.151026   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:25.651869   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:26.151216   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:26.651981   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:27.151958   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:27.651059   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:28.151711   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:28.651801   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:29.151170   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:29.651851   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:30.151157   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:30.651654   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:31.151084   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:31.651758   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:32.151508   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:32.651099   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:33.151680   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:33.651643   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:34.151101   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:34.651107   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:35.150988   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:35.651892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:36.151153   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:36.651103   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:37.151414   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:37.651563   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:38.151178   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:38.651401   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:39.150956   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:39.650979   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:40.151904   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:40.651104   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:41.151273   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:41.651040   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:42.151823   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:42.651144   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:43.151448   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:43.651999   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:44.151103   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:44.651111   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:45.151308   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:45.651953   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:46.151727   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:46.651656   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:47.151732   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:47.651342   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:48.151209   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:48.651132   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:49.151140   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:49.651706   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:50.151487   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:50.651289   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:51.150961   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:51.651096   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:52.150968   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:52.651629   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:53.151897   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:53.651111   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:54.151375   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:54.651108   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:55.151036   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:55.651733   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:56.151260   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:56.651152   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:57.150960   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:57.651169   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:58.151105   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:58.651487   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:59.151042   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:59.651058   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:00.151456   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:00.650980   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:01.151155   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:01.651260   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:02.151783   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:02.651522   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:03.151955   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:03.651242   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:04.151318   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:04.651176   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:05.151161   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:05.651848   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:06.151100   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:06.651828   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:07.151113   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:07.651938   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:08.151467   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:08.651101   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:09.151624   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:09.651209   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:10.151745   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:10.651031   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:11.151720   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:11.651857   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:12.151769   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:12.651470   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:13.151212   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:13.651104   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:14.151106   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:14.651144   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:15.151130   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:15.652008   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:16.151440   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:16.651880   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:17.151343   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:17.651404   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:18.150959   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:18.651272   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:19.151991   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:19.651605   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:20.151125   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:20.651248   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:21.151762   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:21.651604   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:22.151314   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:22.651440   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:23.151928   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:23.651890   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
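The long run of pgrep calls above is a fixed-cadence wait: roughly every 500ms the runner checks for a kube-apiserver process, and when none appears within the window it falls back to gathering kubelet, dmesg, containerd, and container-status logs, as seen below. A minimal sketch of that loop (regex and ~500ms cadence taken from the log; the 60s timeout here is illustrative, not minikube's exact value):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `pgrep -xnf <pattern>` until it exits 0
// (a matching process exists) or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // apiserver process appeared
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(60 * time.Second); err != nil {
		fmt.Println(err) // in the log, this is where log gathering kicks in
	}
}
```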
	I1210 05:55:24.151853   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:24.151952   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:24.176715   57716 cri.go:89] found id: ""
	I1210 05:55:24.176729   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.176736   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:24.176741   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:24.176801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:24.199798   57716 cri.go:89] found id: ""
	I1210 05:55:24.199811   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.199819   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:24.199824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:24.199881   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:24.223446   57716 cri.go:89] found id: ""
	I1210 05:55:24.223459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.223466   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:24.223471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:24.223533   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:24.247963   57716 cri.go:89] found id: ""
	I1210 05:55:24.247976   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.247984   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:24.247989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:24.248052   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:24.271064   57716 cri.go:89] found id: ""
	I1210 05:55:24.271078   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.271085   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:24.271090   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:24.271156   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:24.295582   57716 cri.go:89] found id: ""
	I1210 05:55:24.295595   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.295603   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:24.295608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:24.295665   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:24.319439   57716 cri.go:89] found id: ""
	I1210 05:55:24.319459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.319466   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:24.319474   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:24.319484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:24.374536   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:24.374555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:24.385677   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:24.385693   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:24.468968   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:24.460916   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.461690   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463385   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463760   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.465289   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:24.460916   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.461690   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463385   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463760   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.465289   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:24.468989   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:24.469008   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:24.534097   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:24.534114   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:27.065851   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:27.076794   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:27.076855   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:27.102051   57716 cri.go:89] found id: ""
	I1210 05:55:27.102064   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.102072   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:27.102087   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:27.102159   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:27.125833   57716 cri.go:89] found id: ""
	I1210 05:55:27.125846   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.125853   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:27.125858   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:27.125916   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:27.150782   57716 cri.go:89] found id: ""
	I1210 05:55:27.150795   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.150803   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:27.150808   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:27.150870   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:27.177446   57716 cri.go:89] found id: ""
	I1210 05:55:27.177459   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.177467   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:27.177472   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:27.177530   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:27.202542   57716 cri.go:89] found id: ""
	I1210 05:55:27.202557   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.202564   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:27.202570   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:27.202631   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:27.229302   57716 cri.go:89] found id: ""
	I1210 05:55:27.229316   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.229323   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:27.229328   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:27.229389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:27.258140   57716 cri.go:89] found id: ""
	I1210 05:55:27.258154   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.258162   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:27.258170   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:27.258179   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:27.313276   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:27.313296   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:27.324237   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:27.324252   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:27.386291   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:27.378930   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.379718   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381201   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381605   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.383124   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:27.378930   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.379718   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381201   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381605   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.383124   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:27.386311   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:27.386321   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:27.451779   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:27.451797   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:29.984865   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:29.994990   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:29.995106   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:30.034785   57716 cri.go:89] found id: ""
	I1210 05:55:30.034800   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.034808   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:30.034815   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:30.034899   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:30.063792   57716 cri.go:89] found id: ""
	I1210 05:55:30.063807   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.063816   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:30.063821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:30.063895   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:30.095916   57716 cri.go:89] found id: ""
	I1210 05:55:30.095931   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.095939   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:30.095945   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:30.096020   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:30.123266   57716 cri.go:89] found id: ""
	I1210 05:55:30.123293   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.123300   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:30.123306   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:30.123378   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:30.149145   57716 cri.go:89] found id: ""
	I1210 05:55:30.149159   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.149167   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:30.149173   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:30.149231   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:30.178515   57716 cri.go:89] found id: ""
	I1210 05:55:30.178529   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.178536   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:30.178541   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:30.178601   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:30.202938   57716 cri.go:89] found id: ""
	I1210 05:55:30.202952   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.202959   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:30.202968   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:30.202977   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:30.262024   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:30.262042   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:30.273395   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:30.273411   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:30.339082   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:30.331422   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.332246   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.333884   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.334216   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.335714   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:30.331422   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.332246   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.333884   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.334216   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.335714   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:30.339099   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:30.339111   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:30.401574   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:30.401599   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
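The cycle above is minikube's health probe: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>`, and only when every probe comes back empty does it fall through to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal local sketch of that probe, assuming simplified helper names (minikube actually executes these over its SSH runner, not locally):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
// with --quiet, crictl prints one container ID per line, so empty
// output is exactly the `found id: ""` / `0 containers: []` case above.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids := listContainers(c)
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
	// With all probes empty, the log-gathering phase runs the same
	// diagnostics seen in the log lines above this sketch.
	for _, cmd := range []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u containerd -n 400",
	} {
		_ = exec.Command("/bin/bash", "-c", cmd).Run()
	}
}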
	I1210 05:55:32.947286   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:32.957296   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:32.957360   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:32.982165   57716 cri.go:89] found id: ""
	I1210 05:55:32.982179   57716 logs.go:282] 0 containers: []
	W1210 05:55:32.982186   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:32.982191   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:32.982247   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:33.020504   57716 cri.go:89] found id: ""
	I1210 05:55:33.020517   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.020525   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:33.020530   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:33.020590   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:33.045171   57716 cri.go:89] found id: ""
	I1210 05:55:33.045185   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.045193   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:33.045198   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:33.045261   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:33.069898   57716 cri.go:89] found id: ""
	I1210 05:55:33.069923   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.069931   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:33.069936   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:33.070003   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:33.094592   57716 cri.go:89] found id: ""
	I1210 05:55:33.094607   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.094614   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:33.094619   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:33.094687   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:33.119752   57716 cri.go:89] found id: ""
	I1210 05:55:33.119765   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.119772   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:33.119778   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:33.119842   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:33.144728   57716 cri.go:89] found id: ""
	I1210 05:55:33.144742   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.144749   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:33.144757   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:33.144767   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:33.202510   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:33.202527   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:33.213898   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:33.213914   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:33.276996   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:33.269599   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.270004   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.271689   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.272071   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.273649   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:33.269599   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.270004   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.271689   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.272071   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.273649   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:33.277006   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:33.277016   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:33.337654   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:33.337675   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
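The timestamps show the whole cycle repeating roughly every three seconds, gated on `sudo pgrep -xnf kube-apiserver.*minikube.*` returning non-zero (pgrep exits 1 when no process matches). A sketch of that wait loop, assuming an illustrative two-minute timeout that is not taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process the same way the
// log does. The ~3s cadence matches the timestamps above; the overall
// timeout is an assumption for this sketch.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a matching kube-apiserver process exists
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}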
	I1210 05:55:35.867520   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:35.877494   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:35.877552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:35.903487   57716 cri.go:89] found id: ""
	I1210 05:55:35.903501   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.903508   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:35.903514   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:35.903571   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:35.933040   57716 cri.go:89] found id: ""
	I1210 05:55:35.933054   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.933060   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:35.933066   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:35.933150   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:35.956439   57716 cri.go:89] found id: ""
	I1210 05:55:35.956453   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.956460   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:35.956466   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:35.956522   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:35.983120   57716 cri.go:89] found id: ""
	I1210 05:55:35.983133   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.983140   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:35.983155   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:35.983213   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:36.024072   57716 cri.go:89] found id: ""
	I1210 05:55:36.024085   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.024093   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:36.024098   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:36.024163   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:36.050259   57716 cri.go:89] found id: ""
	I1210 05:55:36.050282   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.050289   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:36.050296   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:36.050375   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:36.079897   57716 cri.go:89] found id: ""
	I1210 05:55:36.079911   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.079918   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:36.079925   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:36.079935   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:36.109390   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:36.109405   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:36.164390   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:36.164407   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:36.175368   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:36.175383   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:36.247833   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:36.240230   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.240985   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.242571   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.243126   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.244643   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:36.240230   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.240985   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.242571   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.243126   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.244643   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:36.247845   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:36.247855   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:38.808939   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:38.819051   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:38.819128   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:38.843620   57716 cri.go:89] found id: ""
	I1210 05:55:38.843643   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.843650   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:38.843656   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:38.843713   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:38.872120   57716 cri.go:89] found id: ""
	I1210 05:55:38.872134   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.872141   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:38.872147   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:38.872204   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:38.896725   57716 cri.go:89] found id: ""
	I1210 05:55:38.896738   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.896746   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:38.896751   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:38.896807   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:38.924643   57716 cri.go:89] found id: ""
	I1210 05:55:38.924657   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.924665   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:38.924670   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:38.924729   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:38.952693   57716 cri.go:89] found id: ""
	I1210 05:55:38.952706   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.952714   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:38.952719   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:38.952774   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:38.976175   57716 cri.go:89] found id: ""
	I1210 05:55:38.976189   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.976196   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:38.976201   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:38.976266   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:39.001657   57716 cri.go:89] found id: ""
	I1210 05:55:39.001671   57716 logs.go:282] 0 containers: []
	W1210 05:55:39.001678   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:39.001686   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:39.001698   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:39.013220   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:39.013240   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:39.084372   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:39.073429   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.077278   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.078119   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.079595   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.080038   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:39.073429   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.077278   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.078119   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.079595   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.080038   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:39.084383   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:39.084393   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:39.145338   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:39.145357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:39.173909   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:39.173925   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:41.731159   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:41.741270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:41.741329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:41.765933   57716 cri.go:89] found id: ""
	I1210 05:55:41.765946   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.765953   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:41.765958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:41.766034   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:41.790822   57716 cri.go:89] found id: ""
	I1210 05:55:41.790842   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.790850   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:41.790855   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:41.790924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:41.817287   57716 cri.go:89] found id: ""
	I1210 05:55:41.817300   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.817312   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:41.817318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:41.817386   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:41.842964   57716 cri.go:89] found id: ""
	I1210 05:55:41.842978   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.842986   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:41.842991   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:41.843068   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:41.871615   57716 cri.go:89] found id: ""
	I1210 05:55:41.871629   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.871637   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:41.871642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:41.871699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:41.896188   57716 cri.go:89] found id: ""
	I1210 05:55:41.896216   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.896223   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:41.896229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:41.896294   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:41.930282   57716 cri.go:89] found id: ""
	I1210 05:55:41.930296   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.930303   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:41.930311   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:41.930320   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:41.985380   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:41.985397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:42.004532   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:42.004551   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:42.075101   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:42.075129   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:42.075143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:42.145894   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:42.145929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:44.679885   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:44.690876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:44.690937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:44.720897   57716 cri.go:89] found id: ""
	I1210 05:55:44.720911   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.720918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:44.720923   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:44.720983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:44.745408   57716 cri.go:89] found id: ""
	I1210 05:55:44.745421   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.745427   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:44.745432   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:44.745495   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:44.773707   57716 cri.go:89] found id: ""
	I1210 05:55:44.773721   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.773728   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:44.773733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:44.773792   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:44.798508   57716 cri.go:89] found id: ""
	I1210 05:55:44.798522   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.798529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:44.798535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:44.798597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:44.822493   57716 cri.go:89] found id: ""
	I1210 05:55:44.822507   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.822515   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:44.822519   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:44.822578   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:44.847294   57716 cri.go:89] found id: ""
	I1210 05:55:44.847308   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.847316   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:44.847321   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:44.847380   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:44.870447   57716 cri.go:89] found id: ""
	I1210 05:55:44.870460   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.870468   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:44.870475   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:44.870485   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:44.926160   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:44.926177   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:44.937022   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:44.937037   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:45.007191   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:45.007203   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:45.007215   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:45.103439   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:45.103467   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:47.653520   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:47.663666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:47.663731   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:47.697444   57716 cri.go:89] found id: ""
	I1210 05:55:47.697457   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.697464   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:47.697469   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:47.697529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:47.728308   57716 cri.go:89] found id: ""
	I1210 05:55:47.728322   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.728329   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:47.728334   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:47.728391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:47.753518   57716 cri.go:89] found id: ""
	I1210 05:55:47.753531   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.753538   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:47.753543   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:47.753600   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:47.777296   57716 cri.go:89] found id: ""
	I1210 05:55:47.777309   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.777316   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:47.777322   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:47.777378   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:47.800977   57716 cri.go:89] found id: ""
	I1210 05:55:47.800998   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.801005   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:47.801010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:47.801067   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:47.825052   57716 cri.go:89] found id: ""
	I1210 05:55:47.825065   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.825073   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:47.825078   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:47.825147   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:47.848863   57716 cri.go:89] found id: ""
	I1210 05:55:47.848876   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.848883   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:47.848892   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:47.848902   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:47.905124   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:47.905139   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:47.915783   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:47.915800   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:47.980730   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:47.980740   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:47.980750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:48.042937   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:48.042955   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:50.581353   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:50.591210   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:50.591269   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:50.620774   57716 cri.go:89] found id: ""
	I1210 05:55:50.620788   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.620794   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:50.620800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:50.620864   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:50.645050   57716 cri.go:89] found id: ""
	I1210 05:55:50.645064   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.645071   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:50.645082   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:50.645146   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:50.679878   57716 cri.go:89] found id: ""
	I1210 05:55:50.679890   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.679897   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:50.679903   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:50.679960   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:50.710005   57716 cri.go:89] found id: ""
	I1210 05:55:50.710018   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.710026   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:50.710032   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:50.710088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:50.744288   57716 cri.go:89] found id: ""
	I1210 05:55:50.744302   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.744311   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:50.744317   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:50.744373   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:50.767954   57716 cri.go:89] found id: ""
	I1210 05:55:50.767967   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.767974   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:50.767980   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:50.768037   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:50.796157   57716 cri.go:89] found id: ""
	I1210 05:55:50.796171   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.796179   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:50.796186   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:50.796196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:50.851621   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:50.851638   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:50.863074   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:50.863091   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:50.939619   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:50.939629   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:50.939639   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:51.008577   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:51.008598   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
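Each `failed describe nodes` warning above captures the command's exit status together with both output streams, which is why the stderr text appears twice (once inside the warning, once between the `** stderr **` markers). A sketch of that capture pattern, using the kubectl path and kubeconfig verbatim from the log while the surrounding structure is an assumption of this sketch:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// Runs the same describe-nodes command through bash and reports a
// non-zero exit as a warning with the captured stdout and stderr.
func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes "+
			"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// e.g. "Process exited with status 1" while the apiserver is down
		fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
			err, stdout.String(), stderr.String())
	}
}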
	I1210 05:55:53.537065   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:53.546821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:53.546878   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:53.571853   57716 cri.go:89] found id: ""
	I1210 05:55:53.571867   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.571874   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:53.571879   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:53.571937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:53.595941   57716 cri.go:89] found id: ""
	I1210 05:55:53.595955   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.595962   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:53.595967   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:53.596023   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:53.620466   57716 cri.go:89] found id: ""
	I1210 05:55:53.620480   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.620486   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:53.620492   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:53.620546   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:53.643628   57716 cri.go:89] found id: ""
	I1210 05:55:53.643641   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.643649   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:53.643655   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:53.643711   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:53.673517   57716 cri.go:89] found id: ""
	I1210 05:55:53.673532   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.673539   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:53.673545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:53.673601   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:53.709885   57716 cri.go:89] found id: ""
	I1210 05:55:53.709899   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.709906   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:53.709911   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:53.709974   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:53.739765   57716 cri.go:89] found id: ""
	I1210 05:55:53.739778   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.739785   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:53.739792   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:53.739802   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:53.795061   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:53.795080   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:53.806101   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:53.806117   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:53.872226   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:53.863177   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.863668   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865274   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865802   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.867416   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:53.863177   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.863668   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865274   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865802   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.867416   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:53.872238   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:53.872248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:53.933601   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:53.933619   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
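
The cycle that just completed repeats below roughly every three seconds: a process probe for kube-apiserver, per-component crictl listings that all come back empty, then a diagnostics pass over kubelet, dmesg, describe nodes, containerd, and container status. As a rough illustration of that wait-and-collect pattern (a minimal sketch only, not minikube's actual implementation; all function names, the timeout, and the abridged command list are hypothetical):

// Illustrative sketch of the poll loop this log reflects: the health
// check is retried on an interval and diagnostics are gathered on each
// miss. Names and timings here are hypothetical, not minikube's API.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the logged probe:
// `sudo pgrep -xnf kube-apiserver.*minikube.*`
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// gatherDiagnostics runs an abridged version of the commands seen in
// the log (shell pipelines like `| tail -n 400` are omitted here).
func gatherDiagnostics() {
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
		{"journalctl", "-u", "containerd", "-n", "400"},
		{"crictl", "ps", "-a"},
	}
	for _, c := range cmds {
		out, _ := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("=== sudo %v ===\n%s\n", c, out)
	}
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // arbitrary illustrative budget
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		gatherDiagnostics()
		time.Sleep(3 * time.Second) // the log shows ~3s between probe cycles
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
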
	I1210 05:55:56.466912   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:56.476796   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:56.476855   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:56.501021   57716 cri.go:89] found id: ""
	I1210 05:55:56.501035   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.501042   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:56.501048   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:56.501109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:56.524562   57716 cri.go:89] found id: ""
	I1210 05:55:56.524576   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.524583   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:56.524588   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:56.524644   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:56.547648   57716 cri.go:89] found id: ""
	I1210 05:55:56.547662   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.547669   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:56.547674   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:56.547730   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:56.576863   57716 cri.go:89] found id: ""
	I1210 05:55:56.576876   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.576883   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:56.576895   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:56.576956   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:56.600963   57716 cri.go:89] found id: ""
	I1210 05:55:56.600977   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.600984   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:56.600989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:56.601049   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:56.624726   57716 cri.go:89] found id: ""
	I1210 05:55:56.624739   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.624747   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:56.624755   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:56.624816   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:56.657236   57716 cri.go:89] found id: ""
	I1210 05:55:56.657249   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.657261   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:56.657270   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:56.657280   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:56.697559   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:56.697576   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:56.757986   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:56.758004   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:56.769563   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:56.769579   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:56.830223   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:56.822784   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.823676   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825199   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825498   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.826965   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:56.822784   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.823676   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825199   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825498   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.826965   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:56.830233   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:56.830243   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:59.393208   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:59.403384   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:59.403452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:59.428722   57716 cri.go:89] found id: ""
	I1210 05:55:59.428749   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.428757   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:59.428763   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:59.428833   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:59.453874   57716 cri.go:89] found id: ""
	I1210 05:55:59.453887   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.453895   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:59.453901   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:59.453962   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:59.478240   57716 cri.go:89] found id: ""
	I1210 05:55:59.478253   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.478260   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:59.478271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:59.478329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:59.502468   57716 cri.go:89] found id: ""
	I1210 05:55:59.502482   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.502489   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:59.502494   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:59.502554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:59.526784   57716 cri.go:89] found id: ""
	I1210 05:55:59.526797   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.526804   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:59.526809   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:59.526872   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:59.552473   57716 cri.go:89] found id: ""
	I1210 05:55:59.552486   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.552493   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:59.552499   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:59.552552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:59.576249   57716 cri.go:89] found id: ""
	I1210 05:55:59.576262   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.576269   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:59.576276   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:59.576288   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:59.631147   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:59.631169   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:59.642052   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:59.642067   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:59.721714   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:59.711733   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.712627   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714378   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714692   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.716946   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:59.711733   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.712627   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714378   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714692   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.716946   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:59.721733   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:59.721745   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:59.783216   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:59.783235   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
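
Every `describe nodes` attempt in this log fails the same way: kubectl cannot reach https://localhost:8441 because nothing is listening on that port. A minimal standalone Go check (illustrative only, not part of the test suite) reproduces the same connection-refused result directly:

// Illustrative check: a direct TCP dial to the apiserver port confirms
// what the repeated kubectl stderr above already shows.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Matches the failure mode in this log:
		// dial tcp [::1]:8441: connect: connection refused
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
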
	I1210 05:56:02.312967   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:02.323213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:02.323279   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:02.347978   57716 cri.go:89] found id: ""
	I1210 05:56:02.347992   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.348011   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:02.348017   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:02.348073   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:02.372899   57716 cri.go:89] found id: ""
	I1210 05:56:02.372912   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.372920   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:02.372926   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:02.372985   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:02.396971   57716 cri.go:89] found id: ""
	I1210 05:56:02.396985   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.396992   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:02.396997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:02.397057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:02.422416   57716 cri.go:89] found id: ""
	I1210 05:56:02.422430   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.422437   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:02.422443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:02.422501   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:02.447977   57716 cri.go:89] found id: ""
	I1210 05:56:02.447990   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.448004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:02.448009   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:02.448066   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:02.471774   57716 cri.go:89] found id: ""
	I1210 05:56:02.471788   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.471795   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:02.471800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:02.471857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:02.496057   57716 cri.go:89] found id: ""
	I1210 05:56:02.496072   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.496079   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:02.496088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:02.496098   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:02.523576   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:02.523592   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:02.579266   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:02.579296   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:02.590792   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:02.590809   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:02.657064   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:02.648570   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.649296   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651116   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651750   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.653344   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:02.648570   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.649296   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651116   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651750   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.653344   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:02.657075   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:02.657085   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:05.229868   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:05.239953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:05.240012   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:05.264605   57716 cri.go:89] found id: ""
	I1210 05:56:05.264618   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.264626   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:05.264631   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:05.264689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:05.288264   57716 cri.go:89] found id: ""
	I1210 05:56:05.288277   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.288285   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:05.288290   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:05.288354   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:05.313427   57716 cri.go:89] found id: ""
	I1210 05:56:05.313441   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.313448   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:05.313454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:05.313510   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:05.344659   57716 cri.go:89] found id: ""
	I1210 05:56:05.344673   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.344680   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:05.344686   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:05.344743   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:05.369600   57716 cri.go:89] found id: ""
	I1210 05:56:05.369614   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.369621   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:05.369626   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:05.369683   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:05.397066   57716 cri.go:89] found id: ""
	I1210 05:56:05.397080   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.397088   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:05.397093   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:05.397150   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:05.422728   57716 cri.go:89] found id: ""
	I1210 05:56:05.422744   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.422751   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:05.422759   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:05.422770   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:05.485204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:05.477114   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.477952   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479558   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479866   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.481321   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:05.477114   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.477952   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479558   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479866   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.481321   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:05.485215   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:05.485227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:05.547693   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:05.547712   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:05.580471   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:05.580488   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:05.639350   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:05.639369   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:08.151149   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:08.162270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:08.162351   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:08.189435   57716 cri.go:89] found id: ""
	I1210 05:56:08.189448   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.189455   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:08.189465   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:08.189530   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:08.218992   57716 cri.go:89] found id: ""
	I1210 05:56:08.219006   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.219031   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:08.219042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:08.219100   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:08.245141   57716 cri.go:89] found id: ""
	I1210 05:56:08.245153   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.245160   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:08.245165   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:08.245221   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:08.273294   57716 cri.go:89] found id: ""
	I1210 05:56:08.273307   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.273314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:08.273319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:08.273382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:08.298396   57716 cri.go:89] found id: ""
	I1210 05:56:08.298410   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.298417   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:08.298422   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:08.298482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:08.322670   57716 cri.go:89] found id: ""
	I1210 05:56:08.322684   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.322691   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:08.322696   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:08.322753   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:08.347986   57716 cri.go:89] found id: ""
	I1210 05:56:08.348000   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.348007   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:08.348015   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:08.348024   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:08.411052   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:08.411070   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:08.438849   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:08.438865   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:08.496560   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:08.496587   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:08.507905   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:08.507921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:08.573377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:08.565623   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.566145   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.567826   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.568336   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.569867   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:08.565623   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.566145   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.567826   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.568336   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.569867   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
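
The `found id: ""` lines in each cycle come from `crictl ps -a --quiet --name=<component>`, which prints one container ID per line; empty output means no container for that component exists in any state. A hypothetical Go helper sketching that probe, assuming crictl and sudo are available on the node:

// Sketch of the per-component probe behind the `found id: ""` lines.
// The helper name is hypothetical; the crictl invocation is the one
// recorded in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out)) // empty slice when nothing matched
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, comp := range components {
		ids := containerIDs(comp)
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", comp)
			continue
		}
		fmt.Printf("%s: %v\n", comp, ids)
	}
}
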
	I1210 05:56:11.073585   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:11.083689   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:11.083757   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:11.108541   57716 cri.go:89] found id: ""
	I1210 05:56:11.108620   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.108628   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:11.108634   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:11.108694   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:11.134331   57716 cri.go:89] found id: ""
	I1210 05:56:11.134346   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.134353   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:11.134358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:11.134417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:11.158615   57716 cri.go:89] found id: ""
	I1210 05:56:11.158628   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.158635   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:11.158640   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:11.158698   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:11.183689   57716 cri.go:89] found id: ""
	I1210 05:56:11.183703   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.183710   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:11.183716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:11.183775   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:11.207798   57716 cri.go:89] found id: ""
	I1210 05:56:11.207812   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.207819   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:11.207825   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:11.207882   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:11.236712   57716 cri.go:89] found id: ""
	I1210 05:56:11.236726   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.236734   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:11.236739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:11.236801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:11.260759   57716 cri.go:89] found id: ""
	I1210 05:56:11.260773   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.260780   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:11.260788   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:11.260798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:11.289769   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:11.289786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:11.354319   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:11.354343   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:11.365879   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:11.365896   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:11.429322   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:11.420840   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.421615   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.423423   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.424052   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.425736   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:11.420840   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.421615   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.423423   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.424052   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.425736   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:11.429334   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:11.429347   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:13.992257   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:14.005684   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:14.005747   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:14.031213   57716 cri.go:89] found id: ""
	I1210 05:56:14.031233   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.031241   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:14.031246   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:14.031308   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:14.055927   57716 cri.go:89] found id: ""
	I1210 05:56:14.055941   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.055948   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:14.055953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:14.056011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:14.080687   57716 cri.go:89] found id: ""
	I1210 05:56:14.080700   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.080707   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:14.080712   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:14.080770   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:14.108973   57716 cri.go:89] found id: ""
	I1210 05:56:14.108986   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.108993   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:14.108999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:14.109057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:14.138949   57716 cri.go:89] found id: ""
	I1210 05:56:14.138963   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.138971   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:14.138976   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:14.139058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:14.162184   57716 cri.go:89] found id: ""
	I1210 05:56:14.162199   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.162206   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:14.162211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:14.162267   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:14.186846   57716 cri.go:89] found id: ""
	I1210 05:56:14.186859   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.186866   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:14.186874   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:14.186885   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:14.214982   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:14.214998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:14.272262   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:14.272279   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:14.283290   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:14.283306   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:14.343519   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:14.335616   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.336321   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338030   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338568   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.340121   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:14.335616   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.336321   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338030   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338568   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.340121   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:14.343530   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:14.343541   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:16.905886   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:16.915932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:16.915991   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:16.943689   57716 cri.go:89] found id: ""
	I1210 05:56:16.943703   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.943710   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:16.943715   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:16.943772   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:16.971692   57716 cri.go:89] found id: ""
	I1210 05:56:16.971705   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.971712   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:16.971717   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:16.971774   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:16.998705   57716 cri.go:89] found id: ""
	I1210 05:56:16.998721   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.998729   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:16.998734   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:16.998805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:17.028716   57716 cri.go:89] found id: ""
	I1210 05:56:17.028730   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.028737   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:17.028743   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:17.028810   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:17.056330   57716 cri.go:89] found id: ""
	I1210 05:56:17.056344   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.056351   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:17.056355   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:17.056412   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:17.084606   57716 cri.go:89] found id: ""
	I1210 05:56:17.084620   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.084627   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:17.084633   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:17.084690   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:17.108463   57716 cri.go:89] found id: ""
	I1210 05:56:17.108476   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.108484   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:17.108492   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:17.108502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:17.119206   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:17.119223   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:17.184513   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:17.176815   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.177383   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.178877   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.179482   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.181206   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
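	The describe-nodes failure is a symptom rather than the fault: kubectl targets this cluster's apiserver endpoint, localhost:8441, and "connection refused" on the dial means nothing is listening there, consistent with the kube-apiserver crictl query returning no containers. A quick reachability sketch of the same check (a plain TCP dial; the port is taken from the errors above, and this is an illustration, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused dial here reproduces the kubectl errors: no process bound to 8441.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver endpoint unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}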
	I1210 05:56:17.184523   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:17.184533   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:17.249050   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:17.249068   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:17.277433   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:17.277448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
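	The timestamps show the whole sweep repeating roughly every three seconds (05:56:17, :19, :22, :25, ...): probe for a kube-apiserver process with pgrep, re-list CRI containers, re-gather logs. A hedged sketch of that cadence (illustration only, with an assumed deadline; minikube's real wait loop is not shown in this excerpt):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the logged probe: pgrep exits 0 only on a match.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed; the real timeout is not visible here
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(3 * time.Second) // then re-check containers and re-gather logs, as each block here does
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}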
	I1210 05:56:19.835189   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:19.845211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:19.845270   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:19.869437   57716 cri.go:89] found id: ""
	I1210 05:56:19.869451   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.869457   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:19.869463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:19.869525   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:19.893666   57716 cri.go:89] found id: ""
	I1210 05:56:19.893680   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.893687   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:19.893691   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:19.893746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:19.925851   57716 cri.go:89] found id: ""
	I1210 05:56:19.925864   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.925871   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:19.925876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:19.925934   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:19.953268   57716 cri.go:89] found id: ""
	I1210 05:56:19.953283   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.953289   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:19.953295   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:19.953352   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:19.980541   57716 cri.go:89] found id: ""
	I1210 05:56:19.980555   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.980562   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:19.980567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:19.980629   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:20.014350   57716 cri.go:89] found id: ""
	I1210 05:56:20.014365   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.014383   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:20.014389   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:20.014463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:20.040904   57716 cri.go:89] found id: ""
	I1210 05:56:20.040918   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.040926   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:20.040933   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:20.040943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:20.097054   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:20.097072   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:20.108443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:20.108459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:20.173764   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:20.164932   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166475   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166965   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168506   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168930   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:20.173773   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:20.173784   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:20.235116   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:20.235134   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
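	The "container status" step is the one command in the sweep with a built-in fallback chain: use crictl if `which` finds it, otherwise fall back to docker ps -a. A sketch of invoking that same compound bash command from Go (the command string is copied verbatim from the log line above; the Go wrapper is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Backticks inside the string are bash command substitution, not Go syntax;
		// the || chains make docker the fallback when crictl is missing or fails.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker listings failed:", err)
		}
		fmt.Print(string(out))
	}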
	I1210 05:56:22.763516   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:22.773433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:22.773490   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:22.797542   57716 cri.go:89] found id: ""
	I1210 05:56:22.797556   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.797562   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:22.797568   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:22.797622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:22.821893   57716 cri.go:89] found id: ""
	I1210 05:56:22.821907   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.821915   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:22.821920   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:22.821976   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:22.850542   57716 cri.go:89] found id: ""
	I1210 05:56:22.850557   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.850564   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:22.850569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:22.850627   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:22.875288   57716 cri.go:89] found id: ""
	I1210 05:56:22.875301   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.875314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:22.875320   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:22.875376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:22.900725   57716 cri.go:89] found id: ""
	I1210 05:56:22.900739   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.900747   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:22.900752   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:22.900808   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:22.931217   57716 cri.go:89] found id: ""
	I1210 05:56:22.931230   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.931237   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:22.931243   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:22.931309   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:22.963506   57716 cri.go:89] found id: ""
	I1210 05:56:22.963519   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.963525   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:22.963533   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:22.963542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:23.025625   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:23.025643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:23.036825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:23.036841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:23.100693   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:23.092404   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.093143   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.094913   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.095571   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.097307   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:23.100703   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:23.100715   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:23.160995   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:23.161014   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:25.690455   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:25.700306   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:25.700369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:25.725916   57716 cri.go:89] found id: ""
	I1210 05:56:25.725931   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.725942   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:25.725948   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:25.726009   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:25.749914   57716 cri.go:89] found id: ""
	I1210 05:56:25.749927   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.749935   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:25.749939   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:25.749998   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:25.776070   57716 cri.go:89] found id: ""
	I1210 05:56:25.776083   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.776090   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:25.776095   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:25.776154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:25.799518   57716 cri.go:89] found id: ""
	I1210 05:56:25.799532   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.799540   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:25.799546   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:25.799608   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:25.822990   57716 cri.go:89] found id: ""
	I1210 05:56:25.823057   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.823064   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:25.823072   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:25.823138   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:25.847416   57716 cri.go:89] found id: ""
	I1210 05:56:25.847430   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.847437   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:25.847442   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:25.847500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:25.871819   57716 cri.go:89] found id: ""
	I1210 05:56:25.871833   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.871840   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:25.871849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:25.871861   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:25.882590   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:25.882607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:25.975908   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:25.961777   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.962673   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967132   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967485   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.972482   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:25.975918   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:25.975929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:26.042569   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:26.042588   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:26.070803   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:26.070819   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:28.629575   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:28.639457   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:28.639513   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:28.663811   57716 cri.go:89] found id: ""
	I1210 05:56:28.663824   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.663832   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:28.663837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:28.663892   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:28.688455   57716 cri.go:89] found id: ""
	I1210 05:56:28.688469   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.688476   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:28.688481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:28.688538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:28.711872   57716 cri.go:89] found id: ""
	I1210 05:56:28.711886   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.711893   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:28.711898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:28.711955   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:28.736153   57716 cri.go:89] found id: ""
	I1210 05:56:28.736166   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.736173   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:28.736181   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:28.736242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:28.759991   57716 cri.go:89] found id: ""
	I1210 05:56:28.760011   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.760018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:28.760023   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:28.760080   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:28.784928   57716 cri.go:89] found id: ""
	I1210 05:56:28.784942   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.784949   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:28.784955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:28.785011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:28.808330   57716 cri.go:89] found id: ""
	I1210 05:56:28.808343   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.808350   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:28.808359   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:28.808368   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:28.864140   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:28.864158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:28.874997   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:28.875030   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:28.946271   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:28.938223   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.939058   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.940712   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.941043   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.942516   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:28.946281   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:28.946291   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:29.015729   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:29.015750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:31.546248   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:31.557000   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:31.557057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:31.581315   57716 cri.go:89] found id: ""
	I1210 05:56:31.581329   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.581336   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:31.581342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:31.581397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:31.606297   57716 cri.go:89] found id: ""
	I1210 05:56:31.606312   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.606327   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:31.606332   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:31.606389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:31.630600   57716 cri.go:89] found id: ""
	I1210 05:56:31.630614   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.630621   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:31.630627   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:31.630684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:31.658929   57716 cri.go:89] found id: ""
	I1210 05:56:31.658942   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.658949   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:31.658955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:31.659042   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:31.684421   57716 cri.go:89] found id: ""
	I1210 05:56:31.684434   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.684441   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:31.684456   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:31.684529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:31.708593   57716 cri.go:89] found id: ""
	I1210 05:56:31.708607   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.708614   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:31.708620   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:31.708678   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:31.733389   57716 cri.go:89] found id: ""
	I1210 05:56:31.733403   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.733411   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:31.733419   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:31.733429   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:31.762157   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:31.762171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:31.818205   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:31.818222   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:31.829166   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:31.829182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:31.894733   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:31.886837   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.887553   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889191   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889735   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.891344   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:31.894745   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:31.894756   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.466636   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:34.477387   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:34.477462   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:34.508975   57716 cri.go:89] found id: ""
	I1210 05:56:34.508989   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.508996   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:34.509002   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:34.509058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:34.536397   57716 cri.go:89] found id: ""
	I1210 05:56:34.536410   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.536417   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:34.536424   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:34.536482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:34.560872   57716 cri.go:89] found id: ""
	I1210 05:56:34.560885   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.560892   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:34.560898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:34.560959   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:34.585436   57716 cri.go:89] found id: ""
	I1210 05:56:34.585450   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.585457   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:34.585463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:34.585520   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:34.609983   57716 cri.go:89] found id: ""
	I1210 05:56:34.609997   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.610004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:34.610010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:34.610065   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:34.634652   57716 cri.go:89] found id: ""
	I1210 05:56:34.634666   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.634674   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:34.634679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:34.634737   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:34.660417   57716 cri.go:89] found id: ""
	I1210 05:56:34.660431   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.660438   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:34.660446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:34.660468   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:34.715849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:34.715870   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:34.726672   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:34.726687   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:34.788897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:34.781210   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.781759   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783378   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783973   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.785508   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:34.788907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:34.788917   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.850671   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:34.850690   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:37.378067   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:37.388018   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:37.388079   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:37.415590   57716 cri.go:89] found id: ""
	I1210 05:56:37.415604   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.415611   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:37.415617   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:37.415679   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:37.443166   57716 cri.go:89] found id: ""
	I1210 05:56:37.443179   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.443186   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:37.443192   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:37.443248   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:37.466187   57716 cri.go:89] found id: ""
	I1210 05:56:37.466201   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.466208   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:37.466214   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:37.466271   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:37.492297   57716 cri.go:89] found id: ""
	I1210 05:56:37.492321   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.492329   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:37.492335   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:37.492389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:37.515998   57716 cri.go:89] found id: ""
	I1210 05:56:37.516012   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.516018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:37.516024   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:37.516083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:37.540490   57716 cri.go:89] found id: ""
	I1210 05:56:37.540503   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.540510   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:37.540516   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:37.540576   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:37.565092   57716 cri.go:89] found id: ""
	I1210 05:56:37.565105   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.565111   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:37.565119   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:37.565137   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:37.625814   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:37.625837   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:37.637078   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:37.637104   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:37.697146   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:37.689936   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.690349   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691533   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691938   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.693652   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:37.697156   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:37.697182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:37.757019   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:37.757038   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:40.287595   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:40.298582   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:40.298641   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:40.322470   57716 cri.go:89] found id: ""
	I1210 05:56:40.322484   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.322491   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:40.322497   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:40.322552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:40.346764   57716 cri.go:89] found id: ""
	I1210 05:56:40.346778   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.346785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:40.346790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:40.346851   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:40.373286   57716 cri.go:89] found id: ""
	I1210 05:56:40.373300   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.373307   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:40.373313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:40.373372   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:40.402348   57716 cri.go:89] found id: ""
	I1210 05:56:40.402361   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.402368   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:40.402373   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:40.402428   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:40.427030   57716 cri.go:89] found id: ""
	I1210 05:56:40.427044   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.427052   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:40.427057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:40.427117   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:40.451451   57716 cri.go:89] found id: ""
	I1210 05:56:40.451478   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.451485   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:40.451491   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:40.451554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:40.480083   57716 cri.go:89] found id: ""
	I1210 05:56:40.480100   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.480106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:40.480114   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:40.480124   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:40.490894   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:40.490909   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:40.556681   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:40.549171   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.549844   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551479   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551814   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.553287   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:40.556692   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:40.556702   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:40.619424   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:40.619443   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:40.652592   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:40.652608   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
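
One pass of the diagnostic loop above does two things: it probes for a running kube-apiserver process, then asks the CRI runtime for every expected control-plane component by name, and each query comes back empty. A minimal shell sketch of the same per-component check, run by hand inside the node (the component list and crictl flags are verbatim from the log; running it interactively is an assumption):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # same query the log issues over SSH; --quiet prints container IDs only
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -n "$ids" ] && echo "$c: $ids" || echo "no container matching \"$c\""
	done
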
	I1210 05:56:43.210686   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:43.221608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:43.221673   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:43.249950   57716 cri.go:89] found id: ""
	I1210 05:56:43.249964   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.249971   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:43.249977   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:43.250038   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:43.276671   57716 cri.go:89] found id: ""
	I1210 05:56:43.276685   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.276692   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:43.276697   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:43.276752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:43.301078   57716 cri.go:89] found id: ""
	I1210 05:56:43.301092   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.301099   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:43.301105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:43.301166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:43.325712   57716 cri.go:89] found id: ""
	I1210 05:56:43.325725   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.325732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:43.325753   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:43.325807   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:43.350013   57716 cri.go:89] found id: ""
	I1210 05:56:43.350027   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.350034   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:43.350039   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:43.350095   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:43.374239   57716 cri.go:89] found id: ""
	I1210 05:56:43.374253   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.374259   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:43.374265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:43.374325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:43.398684   57716 cri.go:89] found id: ""
	I1210 05:56:43.398697   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.398704   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:43.398713   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:43.398723   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:43.429674   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:43.429692   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:43.486606   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:43.486624   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:43.497851   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:43.497867   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:43.564988   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:43.556980   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.557595   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559286   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559906   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.561769   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:43.565001   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:43.565011   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:46.128659   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:46.139799   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:46.139857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:46.169381   57716 cri.go:89] found id: ""
	I1210 05:56:46.169395   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.169402   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:46.169408   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:46.169468   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:46.198882   57716 cri.go:89] found id: ""
	I1210 05:56:46.198896   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.198903   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:46.198909   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:46.198966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:46.234049   57716 cri.go:89] found id: ""
	I1210 05:56:46.234064   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.234072   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:46.234077   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:46.234134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:46.260031   57716 cri.go:89] found id: ""
	I1210 05:56:46.260044   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.260051   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:46.260057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:46.260112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:46.284339   57716 cri.go:89] found id: ""
	I1210 05:56:46.284353   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.284361   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:46.284366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:46.284425   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:46.309943   57716 cri.go:89] found id: ""
	I1210 05:56:46.309957   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.309964   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:46.309970   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:46.310026   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:46.335200   57716 cri.go:89] found id: ""
	I1210 05:56:46.335215   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.335222   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:46.335235   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:46.335247   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:46.391563   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:46.391580   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:46.403485   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:46.403501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:46.469778   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:46.461822   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.462325   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464066   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464772   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.466293   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:46.469787   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:46.469798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:46.533492   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:46.533510   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
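
Every "describe nodes" attempt in this loop fails identically: kubectl dials localhost:8441 and gets connection refused, which means no apiserver is listening on that port, consistent with the empty kube-apiserver container listings above. A quick probe that reproduces the symptom without going through kubectl (a sketch assuming ss and curl are available inside the node):

	sudo ss -ltnp | grep -w 8441 || echo "nothing listening on :8441"
	# the same dial kubectl performs; -k skips TLS verification, -sS keeps errors visible
	curl -ksS https://localhost:8441/healthz || true
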
	I1210 05:56:49.061494   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:49.071430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:49.071494   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:49.094941   57716 cri.go:89] found id: ""
	I1210 05:56:49.094961   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.094969   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:49.094974   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:49.095053   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:49.119980   57716 cri.go:89] found id: ""
	I1210 05:56:49.119994   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.120001   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:49.120006   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:49.120061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:49.149253   57716 cri.go:89] found id: ""
	I1210 05:56:49.149267   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.149275   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:49.149280   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:49.149339   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:49.190394   57716 cri.go:89] found id: ""
	I1210 05:56:49.190407   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.190414   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:49.190419   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:49.190474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:49.226315   57716 cri.go:89] found id: ""
	I1210 05:56:49.226328   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.226335   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:49.226340   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:49.226398   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:49.253703   57716 cri.go:89] found id: ""
	I1210 05:56:49.253716   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.253723   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:49.253729   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:49.253793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:49.278595   57716 cri.go:89] found id: ""
	I1210 05:56:49.278609   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.278616   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:49.278633   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:49.278643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:49.339769   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:49.339786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:49.368179   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:49.368196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:49.424135   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:49.424152   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:49.435251   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:49.435277   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:49.499081   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:49.491345   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.492104   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.493573   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.494053   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.495641   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:52.000764   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:52.011936   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:52.011997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:52.044999   57716 cri.go:89] found id: ""
	I1210 05:56:52.045013   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.045020   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:52.045026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:52.045084   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:52.069248   57716 cri.go:89] found id: ""
	I1210 05:56:52.069262   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.069269   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:52.069274   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:52.069340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:52.098397   57716 cri.go:89] found id: ""
	I1210 05:56:52.098410   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.098428   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:52.098435   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:52.098500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:52.126868   57716 cri.go:89] found id: ""
	I1210 05:56:52.126887   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.126905   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:52.126910   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:52.126965   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:52.150645   57716 cri.go:89] found id: ""
	I1210 05:56:52.150658   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.150666   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:52.150681   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:52.150740   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:52.186283   57716 cri.go:89] found id: ""
	I1210 05:56:52.186296   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.186304   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:52.186318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:52.186374   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:52.218438   57716 cri.go:89] found id: ""
	I1210 05:56:52.218451   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.218458   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:52.218476   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:52.218486   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:52.281011   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:52.273152   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.273845   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.275592   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.276072   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.277623   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:52.281021   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:52.281032   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:52.342042   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:52.342058   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:52.373121   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:52.373136   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:52.428970   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:52.428987   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:54.940399   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:54.950167   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:54.950228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:54.974172   57716 cri.go:89] found id: ""
	I1210 05:56:54.974186   57716 logs.go:282] 0 containers: []
	W1210 05:56:54.974193   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:54.974199   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:54.974257   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:55.008246   57716 cri.go:89] found id: ""
	I1210 05:56:55.008262   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.008270   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:55.008275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:55.008340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:55.034655   57716 cri.go:89] found id: ""
	I1210 05:56:55.034669   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.034676   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:55.034682   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:55.034741   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:55.063972   57716 cri.go:89] found id: ""
	I1210 05:56:55.063986   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.063994   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:55.063999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:55.064057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:55.090263   57716 cri.go:89] found id: ""
	I1210 05:56:55.090275   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.090292   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:55.090298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:55.090353   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:55.113407   57716 cri.go:89] found id: ""
	I1210 05:56:55.113421   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.113428   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:55.113433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:55.113491   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:55.140991   57716 cri.go:89] found id: ""
	I1210 05:56:55.141010   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.141018   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:55.141025   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:55.141036   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:55.201731   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:55.201749   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:55.218256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:55.218270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:55.290800   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:55.282984   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.283573   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285214   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285730   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.287308   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:55.290811   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:55.290831   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:55.355200   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:55.355218   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
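
The timestamps give the loop's cadence: one full pass roughly every three seconds (05:56:40, :43, :46, :49, :52, :55, ...), each pass opening with the same pgrep probe. The implied wait loop, sketched in shell (the pgrep pattern is verbatim from the log; the retry bound is an assumption):

	for i in $(seq 1 100); do
	  # -x exact match, -n newest matching process, -f match the full command line
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo "apiserver process up"; break; }
	  sleep 3
	done
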
	I1210 05:56:57.881741   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:57.891584   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:57.891646   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:57.918310   57716 cri.go:89] found id: ""
	I1210 05:56:57.918323   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.918330   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:57.918336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:57.918391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:57.942318   57716 cri.go:89] found id: ""
	I1210 05:56:57.942331   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.942338   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:57.942344   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:57.942402   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:57.966253   57716 cri.go:89] found id: ""
	I1210 05:56:57.966267   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.966274   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:57.966279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:57.966338   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:57.990324   57716 cri.go:89] found id: ""
	I1210 05:56:57.990338   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.990346   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:57.990351   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:57.990414   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:58.021444   57716 cri.go:89] found id: ""
	I1210 05:56:58.021458   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.021466   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:58.021471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:58.021529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:58.046661   57716 cri.go:89] found id: ""
	I1210 05:56:58.046680   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.046688   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:58.046699   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:58.046767   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:58.071123   57716 cri.go:89] found id: ""
	I1210 05:56:58.071137   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.071145   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:58.071153   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:58.071162   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:58.135978   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:58.135998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:58.167638   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:58.167656   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:58.232589   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:58.232610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:58.244347   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:58.244363   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:58.304989   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:58.297197   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.297898   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.299609   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.300132   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.301733   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:00.806679   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:00.816733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:00.816793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:00.845594   57716 cri.go:89] found id: ""
	I1210 05:57:00.845608   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.845615   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:00.845622   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:00.845682   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:00.880377   57716 cri.go:89] found id: ""
	I1210 05:57:00.880391   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.880399   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:00.880405   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:00.880463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:00.904970   57716 cri.go:89] found id: ""
	I1210 05:57:00.904990   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.904997   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:00.905003   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:00.905063   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:00.933169   57716 cri.go:89] found id: ""
	I1210 05:57:00.933183   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.933191   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:00.933196   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:00.933255   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:00.962218   57716 cri.go:89] found id: ""
	I1210 05:57:00.962231   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.962238   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:00.962244   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:00.962301   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:00.987794   57716 cri.go:89] found id: ""
	I1210 05:57:00.987807   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.987814   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:00.987820   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:00.987879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:01.014287   57716 cri.go:89] found id: ""
	I1210 05:57:01.014302   57716 logs.go:282] 0 containers: []
	W1210 05:57:01.014309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:01.014318   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:01.014328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:01.045925   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:01.045941   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:01.102696   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:01.102714   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:01.114077   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:01.114092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:01.201703   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:01.177406   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.182687   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.186518   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.195186   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.196003   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:01.201726   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:01.201738   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
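
The recurring "container status" gather is a single fallback chain: resolve crictl's full path when it is installed (otherwise keep the bare name so the failure stays visible), and fall back to docker ps only if the crictl listing itself fails. Unrolled for readability (a behavior-equivalent sketch of the one-liner in the log, not minikube source):

	CRICTL="$(which crictl || echo crictl)"    # full path when installed, bare name otherwise
	sudo "$CRICTL" ps -a || sudo docker ps -a  # docker listing is the fallback
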
	I1210 05:57:03.774227   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:03.784265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:03.784325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:03.809259   57716 cri.go:89] found id: ""
	I1210 05:57:03.809273   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.809280   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:03.809285   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:03.809347   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:03.835314   57716 cri.go:89] found id: ""
	I1210 05:57:03.835329   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.835336   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:03.835342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:03.835401   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:03.860149   57716 cri.go:89] found id: ""
	I1210 05:57:03.860163   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.860170   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:03.860175   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:03.860243   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:03.886583   57716 cri.go:89] found id: ""
	I1210 05:57:03.886597   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.886604   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:03.886610   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:03.886669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:03.915441   57716 cri.go:89] found id: ""
	I1210 05:57:03.915454   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.915462   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:03.915467   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:03.915528   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:03.939994   57716 cri.go:89] found id: ""
	I1210 05:57:03.940008   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.940015   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:03.940021   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:03.944397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:03.970729   57716 cri.go:89] found id: ""
	I1210 05:57:03.970742   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.970749   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:03.970757   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:03.970768   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:04.027596   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:04.027617   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:04.039557   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:04.039578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:04.105314   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:04.097441   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.098313   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.099991   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.100340   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.101876   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:04.105325   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:04.105336   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:04.167908   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:04.167927   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
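	[editor's note] The cycle above shows minikube's cri.go probing each control-plane component with `sudo crictl ps -a --quiet --name=<component>` and treating empty output as "no container found". A minimal standalone sketch of that lookup follows; the helper name and the direct exec call are illustrative assumptions, not minikube's actual API, and it assumes sudo and crictl are available on the host.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (any state) whose
	// name matches the given component, using crictl's --quiet mode, which
	// prints one container ID per line and nothing else.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		// Empty output means no matching container exists in any state.
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			} else {
				fmt.Printf("found ids for %q: %v\n", c, ids)
			}
		}
	}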
	I1210 05:57:06.703048   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:06.712953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:06.713014   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:06.740745   57716 cri.go:89] found id: ""
	I1210 05:57:06.740759   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.740766   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:06.740771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:06.740826   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:06.764572   57716 cri.go:89] found id: ""
	I1210 05:57:06.764585   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.764592   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:06.764598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:06.764654   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:06.792403   57716 cri.go:89] found id: ""
	I1210 05:57:06.792418   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.792425   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:06.792430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:06.792488   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:06.816569   57716 cri.go:89] found id: ""
	I1210 05:57:06.816583   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.816591   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:06.816596   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:06.816659   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:06.841104   57716 cri.go:89] found id: ""
	I1210 05:57:06.841118   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.841125   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:06.841131   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:06.841191   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:06.863923   57716 cri.go:89] found id: ""
	I1210 05:57:06.863936   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.863943   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:06.863949   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:06.864004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:06.889078   57716 cri.go:89] found id: ""
	I1210 05:57:06.889091   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.889099   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:06.889106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:06.889116   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:06.943842   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:06.943863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:06.954461   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:06.954477   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:07.025823   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:07.017208   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.017859   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.019473   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.020044   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.021782   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:07.017208   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.017859   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.019473   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.020044   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.021782   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:07.025833   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:07.025847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:07.087136   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:07.087156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.618129   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:09.627876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:09.627939   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:09.655385   57716 cri.go:89] found id: ""
	I1210 05:57:09.655399   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.655406   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:09.655411   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:09.655476   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:09.678439   57716 cri.go:89] found id: ""
	I1210 05:57:09.678453   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.678460   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:09.678466   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:09.678521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:09.708049   57716 cri.go:89] found id: ""
	I1210 05:57:09.708063   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.708071   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:09.708076   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:09.708134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:09.731272   57716 cri.go:89] found id: ""
	I1210 05:57:09.731286   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.731293   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:09.731298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:09.731355   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:09.756542   57716 cri.go:89] found id: ""
	I1210 05:57:09.756556   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.756563   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:09.756569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:09.756625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:09.782376   57716 cri.go:89] found id: ""
	I1210 05:57:09.782389   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.782396   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:09.782402   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:09.782469   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:09.806766   57716 cri.go:89] found id: ""
	I1210 05:57:09.806780   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.806787   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:09.806795   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:09.806806   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:09.817591   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:09.817607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:09.877883   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:09.869907   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.870472   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872036   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872545   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.874081   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:09.869907   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.870472   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872036   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872545   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.874081   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:09.877897   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:09.877907   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:09.939799   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:09.939817   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.972539   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:09.972555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.528080   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:12.538052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:12.538112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:12.561407   57716 cri.go:89] found id: ""
	I1210 05:57:12.561421   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.561429   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:12.561434   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:12.561504   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:12.587323   57716 cri.go:89] found id: ""
	I1210 05:57:12.587337   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.587344   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:12.587349   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:12.587407   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:12.611528   57716 cri.go:89] found id: ""
	I1210 05:57:12.611542   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.611550   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:12.611555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:12.611613   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:12.639252   57716 cri.go:89] found id: ""
	I1210 05:57:12.639266   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.639273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:12.639278   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:12.639340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:12.662845   57716 cri.go:89] found id: ""
	I1210 05:57:12.662858   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.662865   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:12.662871   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:12.662924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:12.687312   57716 cri.go:89] found id: ""
	I1210 05:57:12.687325   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.687332   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:12.687338   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:12.687410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:12.712443   57716 cri.go:89] found id: ""
	I1210 05:57:12.712456   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.712463   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:12.712471   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:12.712484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:12.772312   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:12.772330   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:12.800589   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:12.800611   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.856815   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:12.856832   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:12.868411   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:12.868427   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:12.938613   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:12.928160   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.928723   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.932890   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.933536   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.935292   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:12.928160   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.928723   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.932890   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.933536   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.935292   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
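	[editor's note] Every "describe nodes" attempt above fails the same way: kubectl's discovery requests to https://localhost:8441 get "connect: connection refused", meaning nothing is listening on the apiserver port at all (consistent with the empty kube-apiserver container listings). A minimal probe that reproduces the symptom, reusing the host/port from the log purely for illustration:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused TCP dial here matches the kubectl error:
		// "dial tcp [::1]:8441: connect: connection refused".
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}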
	I1210 05:57:15.439137   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:15.449933   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:15.450005   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:15.483755   57716 cri.go:89] found id: ""
	I1210 05:57:15.483769   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.483775   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:15.483781   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:15.483837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:15.507520   57716 cri.go:89] found id: ""
	I1210 05:57:15.507534   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.507542   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:15.507547   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:15.507605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:15.534553   57716 cri.go:89] found id: ""
	I1210 05:57:15.534566   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.534573   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:15.534578   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:15.534635   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:15.559360   57716 cri.go:89] found id: ""
	I1210 05:57:15.559374   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.559381   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:15.559386   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:15.559443   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:15.584591   57716 cri.go:89] found id: ""
	I1210 05:57:15.584607   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.584614   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:15.584619   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:15.584677   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:15.613451   57716 cri.go:89] found id: ""
	I1210 05:57:15.613471   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.613479   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:15.613485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:15.613607   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:15.638843   57716 cri.go:89] found id: ""
	I1210 05:57:15.638858   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.638865   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:15.638874   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:15.638884   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:15.694185   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:15.694203   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:15.704709   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:15.704725   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:15.769534   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:15.761459   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.762286   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.763956   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.764609   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.766176   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:15.761459   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.762286   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.763956   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.764609   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.766176   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:15.769543   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:15.769556   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:15.830240   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:15.830258   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:18.356935   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:18.366837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:18.366896   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:18.391280   57716 cri.go:89] found id: ""
	I1210 05:57:18.391294   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.391301   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:18.391308   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:18.391376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:18.421532   57716 cri.go:89] found id: ""
	I1210 05:57:18.421546   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.421553   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:18.421558   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:18.421625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:18.455057   57716 cri.go:89] found id: ""
	I1210 05:57:18.455071   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.455078   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:18.455083   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:18.455153   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:18.488121   57716 cri.go:89] found id: ""
	I1210 05:57:18.488135   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.488142   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:18.488148   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:18.488210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:18.511864   57716 cri.go:89] found id: ""
	I1210 05:57:18.511878   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.511886   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:18.511905   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:18.511966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:18.535922   57716 cri.go:89] found id: ""
	I1210 05:57:18.535936   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.535957   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:18.535963   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:18.536029   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:18.560287   57716 cri.go:89] found id: ""
	I1210 05:57:18.560302   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.560309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:18.560317   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:18.560328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:18.627753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:18.619860   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.620508   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622357   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622888   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.624346   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:18.619860   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.620508   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622357   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622888   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.624346   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:18.627764   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:18.627776   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:18.688471   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:18.688489   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:18.719143   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:18.719159   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:18.774435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:18.774453   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
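	[editor's note] The "Gathering logs for kubelet/containerd" lines above each run `journalctl -u <unit> -n 400` over SSH to capture the unit's last 400 journal lines. A self-contained sketch of that step, run locally instead of via minikube's ssh_runner; the unitLogs helper is hypothetical and assumes systemd units named as in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs returns the last `lines` journal entries for a systemd unit,
	// mirroring the journalctl invocation shown in the log.
	func unitLogs(unit string, lines int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "containerd"} {
			logs, err := unitLogs(unit, 400)
			if err != nil {
				fmt.Printf("gathering %s logs failed: %v\n", unit, err)
				continue
			}
			fmt.Printf("=== %s (%d bytes captured) ===\n", unit, len(logs))
		}
	}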
	I1210 05:57:21.285722   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:21.295523   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:21.295582   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:21.322675   57716 cri.go:89] found id: ""
	I1210 05:57:21.322688   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.322696   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:21.322701   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:21.322758   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:21.347136   57716 cri.go:89] found id: ""
	I1210 05:57:21.347150   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.347157   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:21.347162   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:21.347219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:21.372204   57716 cri.go:89] found id: ""
	I1210 05:57:21.372217   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.372224   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:21.372229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:21.372283   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:21.395417   57716 cri.go:89] found id: ""
	I1210 05:57:21.395431   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.395438   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:21.395443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:21.395515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:21.440154   57716 cri.go:89] found id: ""
	I1210 05:57:21.440167   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.440174   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:21.440179   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:21.440240   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:21.473140   57716 cri.go:89] found id: ""
	I1210 05:57:21.473154   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.473166   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:21.473172   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:21.473227   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:21.501607   57716 cri.go:89] found id: ""
	I1210 05:57:21.501630   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.501638   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:21.501646   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:21.501657   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:21.534381   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:21.534397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:21.591435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:21.591454   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:21.602570   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:21.602586   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:21.665543   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:21.656612   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.657173   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659237   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659598   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.661250   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:21.656612   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.657173   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659237   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659598   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.661250   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:21.665553   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:21.665564   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.232360   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:24.242545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:24.242605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:24.268962   57716 cri.go:89] found id: ""
	I1210 05:57:24.268976   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.268983   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:24.268989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:24.269051   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:24.293625   57716 cri.go:89] found id: ""
	I1210 05:57:24.293638   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.293645   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:24.293650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:24.293706   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:24.323101   57716 cri.go:89] found id: ""
	I1210 05:57:24.323115   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.323122   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:24.323127   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:24.323184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:24.352417   57716 cri.go:89] found id: ""
	I1210 05:57:24.352431   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.352442   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:24.352448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:24.352506   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:24.377825   57716 cri.go:89] found id: ""
	I1210 05:57:24.377839   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.377846   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:24.377851   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:24.377907   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:24.401476   57716 cri.go:89] found id: ""
	I1210 05:57:24.401490   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.401497   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:24.401502   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:24.401560   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:24.430784   57716 cri.go:89] found id: ""
	I1210 05:57:24.430798   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.430805   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:24.430813   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:24.430826   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:24.496086   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:24.496105   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:24.508163   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:24.508178   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:24.572343   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:24.563972   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.564594   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.566433   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.567204   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.568973   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:24.563972   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.564594   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.566433   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.567204   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.568973   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:24.572354   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:24.572365   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.634266   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:24.634284   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:27.162032   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:27.171692   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:27.171751   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:27.195293   57716 cri.go:89] found id: ""
	I1210 05:57:27.195306   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.195313   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:27.195319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:27.195375   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:27.223719   57716 cri.go:89] found id: ""
	I1210 05:57:27.223733   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.223741   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:27.223746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:27.223805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:27.249635   57716 cri.go:89] found id: ""
	I1210 05:57:27.249648   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.249655   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:27.249661   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:27.249718   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:27.274420   57716 cri.go:89] found id: ""
	I1210 05:57:27.274434   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.274443   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:27.274448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:27.274515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:27.302747   57716 cri.go:89] found id: ""
	I1210 05:57:27.302760   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.302777   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:27.302782   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:27.302842   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:27.327624   57716 cri.go:89] found id: ""
	I1210 05:57:27.327638   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.327645   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:27.327650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:27.327710   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:27.351138   57716 cri.go:89] found id: ""
	I1210 05:57:27.351152   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.351159   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:27.351168   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:27.351179   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:27.416428   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:27.416448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:27.458729   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:27.458746   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:27.517941   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:27.517959   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:27.528443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:27.528459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:27.592381   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:27.584705   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.585249   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.586673   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.587168   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.588572   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
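The probe lines above follow a fixed pattern: for each control-plane component, minikube runs crictl ps -a --quiet --name=<component> and treats empty output as "no container found". A minimal, hypothetical Go sketch of that loop (the component list is taken from the log; function names and error handling are illustrative, not minikube's actual cri.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the control-plane names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and
// returns the container IDs it prints, one per line (empty if none).
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}

An all-empty pass, as in every cycle logged here, means containerd is answering crictl but no Kubernetes component container was ever created.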
	I1210 05:57:30.094042   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:30.104609   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:30.104685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:30.131255   57716 cri.go:89] found id: ""
	I1210 05:57:30.131270   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.131277   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:30.131283   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:30.131348   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:30.160477   57716 cri.go:89] found id: ""
	I1210 05:57:30.160491   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.160498   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:30.160503   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:30.160562   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:30.186824   57716 cri.go:89] found id: ""
	I1210 05:57:30.186837   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.186845   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:30.186850   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:30.186910   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:30.212870   57716 cri.go:89] found id: ""
	I1210 05:57:30.212885   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.212892   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:30.212899   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:30.212957   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:30.238085   57716 cri.go:89] found id: ""
	I1210 05:57:30.238098   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.238105   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:30.238111   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:30.238169   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:30.264614   57716 cri.go:89] found id: ""
	I1210 05:57:30.264628   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.264635   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:30.264641   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:30.264697   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:30.292801   57716 cri.go:89] found id: ""
	I1210 05:57:30.292816   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.292823   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:30.292831   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:30.292841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:30.324527   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:30.324543   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:30.382130   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:30.382156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:30.392903   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:30.392921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:30.479224   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:30.470442   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.471725   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.473752   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.474178   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.475815   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:30.479235   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:30.479257   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.043979   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:33.054086   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:33.054144   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:33.079719   57716 cri.go:89] found id: ""
	I1210 05:57:33.079733   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.079740   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:33.079746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:33.079804   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:33.109000   57716 cri.go:89] found id: ""
	I1210 05:57:33.109013   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.109020   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:33.109026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:33.109083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:33.134184   57716 cri.go:89] found id: ""
	I1210 05:57:33.134198   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.134206   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:33.134213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:33.134275   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:33.158142   57716 cri.go:89] found id: ""
	I1210 05:57:33.158155   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.158162   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:33.158168   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:33.158253   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:33.181293   57716 cri.go:89] found id: ""
	I1210 05:57:33.181306   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.181313   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:33.181319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:33.181376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:33.206025   57716 cri.go:89] found id: ""
	I1210 05:57:33.206040   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.206047   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:33.206052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:33.206149   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:33.230253   57716 cri.go:89] found id: ""
	I1210 05:57:33.230267   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.230275   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:33.230283   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:33.230293   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.292011   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:33.292028   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:33.318004   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:33.318019   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:33.377256   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:33.377273   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:33.387928   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:33.387943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:33.461753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:33.453954   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.454800   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456253   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456768   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.458350   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
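Note the cadence: the pgrep check and the full probe-and-gather pass repeat roughly every three seconds (05:57:27, :30, :33, ...), which is a poll-until-deadline loop around the apiserver process check. A hedged Go sketch of such a loop; the interval and timeout values are assumptions for illustration, not minikube's actual settings:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `sudo pgrep -xnf kube-apiserver.*minikube.*`
// until a matching process appears or the deadline passes; pgrep exits
// 0 when at least one process matches. Interval and timeout here are
// illustrative values.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // apiserver process found
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the check never succeeds, so each iteration falls through to the same container probes and log gathering shown above.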
	I1210 05:57:35.962013   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:35.972548   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:35.972622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:36.000855   57716 cri.go:89] found id: ""
	I1210 05:57:36.000870   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.000880   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:36.000900   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:36.000977   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:36.029136   57716 cri.go:89] found id: ""
	I1210 05:57:36.029151   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.029158   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:36.029164   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:36.029228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:36.054512   57716 cri.go:89] found id: ""
	I1210 05:57:36.054525   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.054533   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:36.054538   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:36.054597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:36.080508   57716 cri.go:89] found id: ""
	I1210 05:57:36.080522   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.080529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:36.080535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:36.080594   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:36.108590   57716 cri.go:89] found id: ""
	I1210 05:57:36.108604   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.108611   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:36.108616   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:36.108684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:36.137690   57716 cri.go:89] found id: ""
	I1210 05:57:36.137704   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.137711   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:36.137716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:36.137777   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:36.164307   57716 cri.go:89] found id: ""
	I1210 05:57:36.164321   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.164328   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:36.164335   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:36.164345   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:36.219816   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:36.219833   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:36.231171   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:36.231187   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:36.294059   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:36.285785   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.286547   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288109   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288462   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.290084   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:36.294068   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:36.294078   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:36.358593   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:36.358612   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:38.888296   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:38.898447   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:38.898505   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:38.925123   57716 cri.go:89] found id: ""
	I1210 05:57:38.925137   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.925144   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:38.925150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:38.925210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:38.949713   57716 cri.go:89] found id: ""
	I1210 05:57:38.949727   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.949734   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:38.949739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:38.949797   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:38.974867   57716 cri.go:89] found id: ""
	I1210 05:57:38.974881   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.974888   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:38.974893   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:38.974949   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:39.008214   57716 cri.go:89] found id: ""
	I1210 05:57:39.008228   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.008235   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:39.008240   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:39.008300   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:39.033316   57716 cri.go:89] found id: ""
	I1210 05:57:39.033330   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.033342   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:39.033347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:39.033405   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:39.057634   57716 cri.go:89] found id: ""
	I1210 05:57:39.057648   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.057655   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:39.057660   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:39.057719   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:39.082101   57716 cri.go:89] found id: ""
	I1210 05:57:39.082115   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.082125   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:39.082133   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:39.082143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:39.144897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:39.137033   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.137582   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139164   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139565   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.141172   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:39.144907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:39.144920   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:39.209520   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:39.209538   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:39.239106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:39.239121   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:39.294711   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:39.294728   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
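Each gathering pass collects the same five sources, merely in a different order from cycle to cycle: containerd and kubelet unit logs via journalctl, container status via crictl (falling back to docker), filtered dmesg, and kubectl describe nodes. A compact, assumed-for-illustration Go sketch of collecting the journal- and dmesg-based sources into named buffers (labels and structure are hypothetical, not logs.go itself):

package main

import (
	"fmt"
	"os/exec"
)

// sources maps a label to the shell command the log shows minikube
// running for it; only the journal/dmesg sources are sketched here.
var sources = map[string]string{
	"containerd": "sudo journalctl -u containerd -n 400",
	"kubelet":    "sudo journalctl -u kubelet -n 400",
	"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
}

func main() {
	for label, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", label, err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n", label, len(out))
	}
}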
	I1210 05:57:41.805411   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:41.814952   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:41.815027   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:41.838919   57716 cri.go:89] found id: ""
	I1210 05:57:41.838933   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.838940   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:41.838946   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:41.839004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:41.865368   57716 cri.go:89] found id: ""
	I1210 05:57:41.865382   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.865389   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:41.865394   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:41.865452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:41.889411   57716 cri.go:89] found id: ""
	I1210 05:57:41.889424   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.889431   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:41.889436   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:41.889521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:41.915079   57716 cri.go:89] found id: ""
	I1210 05:57:41.915093   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.915101   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:41.915110   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:41.915173   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:41.940274   57716 cri.go:89] found id: ""
	I1210 05:57:41.940288   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.940295   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:41.940301   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:41.940360   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:41.969301   57716 cri.go:89] found id: ""
	I1210 05:57:41.969314   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.969321   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:41.969329   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:41.969387   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:41.993086   57716 cri.go:89] found id: ""
	I1210 05:57:41.993100   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.993108   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:41.993116   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:41.993127   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:42.006335   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:42.006357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:42.077276   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:42.067659   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.069125   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.070001   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071203   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071880   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:42.077290   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:42.077302   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:42.143212   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:42.143248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:42.179140   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:42.179158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:44.752413   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:44.762150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:44.762207   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:44.791897   57716 cri.go:89] found id: ""
	I1210 05:57:44.791911   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.791918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:44.791924   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:44.791983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:44.815813   57716 cri.go:89] found id: ""
	I1210 05:57:44.815827   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.815834   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:44.815839   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:44.815894   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:44.839318   57716 cri.go:89] found id: ""
	I1210 05:57:44.839331   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.839337   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:44.839342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:44.839399   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:44.866822   57716 cri.go:89] found id: ""
	I1210 05:57:44.866835   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.866842   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:44.866848   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:44.866904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:44.892455   57716 cri.go:89] found id: ""
	I1210 05:57:44.892469   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.892476   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:44.892481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:44.892536   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:44.920574   57716 cri.go:89] found id: ""
	I1210 05:57:44.920588   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.920596   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:44.920602   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:44.920663   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:44.947951   57716 cri.go:89] found id: ""
	I1210 05:57:44.947965   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.947971   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:44.947979   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:44.947988   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:45.005480   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:45.005501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:45.022560   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:45.022578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:45.142523   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:45.129527   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.130054   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.132621   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.134289   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.135580   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:45.142534   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:45.142550   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:45.216088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:45.216135   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:47.759715   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:47.769555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:47.769615   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:47.793943   57716 cri.go:89] found id: ""
	I1210 05:57:47.793957   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.793964   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:47.793969   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:47.794039   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:47.818334   57716 cri.go:89] found id: ""
	I1210 05:57:47.818348   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.818355   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:47.818360   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:47.818417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:47.842582   57716 cri.go:89] found id: ""
	I1210 05:57:47.842599   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.842617   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:47.842623   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:47.842689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:47.868471   57716 cri.go:89] found id: ""
	I1210 05:57:47.868485   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.868492   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:47.868498   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:47.868559   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:47.897381   57716 cri.go:89] found id: ""
	I1210 05:57:47.897394   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.897401   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:47.897416   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:47.897473   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:47.920386   57716 cri.go:89] found id: ""
	I1210 05:57:47.920400   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.920407   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:47.920412   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:47.920474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:47.947866   57716 cri.go:89] found id: ""
	I1210 05:57:47.947879   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.947886   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:47.947894   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:47.947904   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:48.008844   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:48.008863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:48.038885   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:48.038903   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:48.095592   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:48.095610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:48.107140   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:48.107155   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:48.171340   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:48.162734   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.163476   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165210   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165663   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.167242   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:50.672091   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:50.683391   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:50.683451   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:50.711296   57716 cri.go:89] found id: ""
	I1210 05:57:50.711311   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.711319   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:50.711327   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:50.711382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:50.740763   57716 cri.go:89] found id: ""
	I1210 05:57:50.740777   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.740785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:50.740790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:50.740853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:50.772079   57716 cri.go:89] found id: ""
	I1210 05:57:50.772093   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.772111   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:50.772117   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:50.772184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:50.800962   57716 cri.go:89] found id: ""
	I1210 05:57:50.800975   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.800982   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:50.800988   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:50.801044   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:50.825974   57716 cri.go:89] found id: ""
	I1210 05:57:50.825993   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.826000   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:50.826005   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:50.826061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:50.854343   57716 cri.go:89] found id: ""
	I1210 05:57:50.854356   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.854364   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:50.854369   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:50.854426   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:50.878560   57716 cri.go:89] found id: ""
	I1210 05:57:50.878573   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.878581   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:50.878599   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:50.878609   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:50.906006   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:50.906022   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:50.961851   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:50.961869   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:50.973152   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:50.973171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:51.044678   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:51.036912   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.037431   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039082   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039573   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.041151   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
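The five E-prefixed lines per attempt are client-go retrying API group discovery (memcache.go). The node's /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, and since the kube-apiserver probes above keep coming back empty, nothing is listening on that port, so each dial fails immediately with connection refused. A quick way to confirm that condition, as a sketch that assumes it runs inside the node:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same endpoint kubectl is using; "connection refused"
        // here confirms no apiserver process holds the port.
        conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }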
	I1210 05:57:51.044689   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:51.044699   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
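The timestamps show the retry cadence: a pgrep for a kube-apiserver process, the container probes, and the log gathers repeat roughly every three seconds (05:57:50, :53, :56, ...). Reduced to a sketch, the wait loop looks like the following; the fixed interval and deadline are illustrative assumptions, and minikube's real loop also re-gathers logs on every miss:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls until pgrep finds a kube-apiserver process or
    // the deadline passes. -x matches the pattern exactly, -n takes the
    // newest matching PID, -f matches against the full command line.
    func waitForAPIServer(deadline time.Duration) error {
        timeout := time.After(deadline)
        tick := time.NewTicker(3 * time.Second)
        defer tick.Stop()
        for {
            select {
            case <-timeout:
                return fmt.Errorf("kube-apiserver did not appear within %s", deadline)
            case <-tick.C:
                if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                    return nil // exit status 0: a matching process exists
                }
            }
        }
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }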
	I1210 05:57:53.606481   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:53.616567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:53.616625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:53.641012   57716 cri.go:89] found id: ""
	I1210 05:57:53.641025   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.641031   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:53.641037   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:53.641092   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:53.673275   57716 cri.go:89] found id: ""
	I1210 05:57:53.673290   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.673307   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:53.673313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:53.673369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:53.709276   57716 cri.go:89] found id: ""
	I1210 05:57:53.709291   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.709298   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:53.709302   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:53.709369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:53.739332   57716 cri.go:89] found id: ""
	I1210 05:57:53.739346   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.739353   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:53.739358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:53.739415   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:53.764637   57716 cri.go:89] found id: ""
	I1210 05:57:53.764650   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.764657   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:53.764662   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:53.764717   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:53.793424   57716 cri.go:89] found id: ""
	I1210 05:57:53.793438   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.793446   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:53.793451   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:53.793514   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:53.823828   57716 cri.go:89] found id: ""
	I1210 05:57:53.823842   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.823849   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:53.823857   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:53.823868   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:53.834565   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:53.834583   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:53.898035   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:53.890056   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.890844   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.892495   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.893000   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.894620   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:53.898052   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:53.898063   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:53.960027   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:53.960044   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:53.988584   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:53.988600   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.551892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:56.562044   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:56.562109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:56.587872   57716 cri.go:89] found id: ""
	I1210 05:57:56.587889   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.587897   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:56.587902   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:56.587967   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:56.613907   57716 cri.go:89] found id: ""
	I1210 05:57:56.613920   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.613927   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:56.613932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:56.613988   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:56.638685   57716 cri.go:89] found id: ""
	I1210 05:57:56.638699   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.638706   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:56.638711   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:56.638768   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:56.665211   57716 cri.go:89] found id: ""
	I1210 05:57:56.665225   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.665232   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:56.665237   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:56.665295   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:56.696149   57716 cri.go:89] found id: ""
	I1210 05:57:56.696163   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.696169   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:56.696174   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:56.696231   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:56.728016   57716 cri.go:89] found id: ""
	I1210 05:57:56.728029   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.728036   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:56.728042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:56.728104   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:56.752871   57716 cri.go:89] found id: ""
	I1210 05:57:56.752886   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.752894   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:56.752901   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:56.752913   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:56.783267   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:56.783283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.842023   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:56.842046   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:56.853533   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:56.853549   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:56.914976   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:56.907206   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.907987   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909541   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909854   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.911455   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:56.914988   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:56.915000   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
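The "container status" gather is deliberately routed through bash so it can degrade: resolve crictl with which (falling back to the bare name), and if the whole crictl invocation fails, fall back to docker ps -a. The same one-liner can be driven from Go as below (a sketch that runs locally; minikube executes it over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl (resolved via which), fall back to docker.
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }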
	I1210 05:57:59.477082   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:59.487185   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:59.487242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:59.511535   57716 cri.go:89] found id: ""
	I1210 05:57:59.511549   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.511556   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:59.511562   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:59.511639   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:59.536235   57716 cri.go:89] found id: ""
	I1210 05:57:59.536249   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.536265   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:59.536271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:59.536329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:59.560801   57716 cri.go:89] found id: ""
	I1210 05:57:59.560815   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.560821   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:59.560827   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:59.560890   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:59.586232   57716 cri.go:89] found id: ""
	I1210 05:57:59.586247   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.586273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:59.586279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:59.586343   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:59.610087   57716 cri.go:89] found id: ""
	I1210 05:57:59.610101   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.610108   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:59.610113   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:59.610170   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:59.634249   57716 cri.go:89] found id: ""
	I1210 05:57:59.634263   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.634270   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:59.634275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:59.634333   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:59.659066   57716 cri.go:89] found id: ""
	I1210 05:57:59.659100   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.659106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:59.659115   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:59.659125   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:59.670606   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:59.670622   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:59.744825   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:59.737528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.737938   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739905   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.741403   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:59.744835   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:59.744847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:59.806075   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:59.806092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:59.841753   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:59.841769   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.400095   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:02.410925   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:02.410999   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:02.435337   57716 cri.go:89] found id: ""
	I1210 05:58:02.435351   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.435358   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:02.435363   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:02.435421   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:02.459273   57716 cri.go:89] found id: ""
	I1210 05:58:02.459287   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.459294   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:02.459299   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:02.459369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:02.484838   57716 cri.go:89] found id: ""
	I1210 05:58:02.484859   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.484867   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:02.484872   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:02.484930   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:02.513703   57716 cri.go:89] found id: ""
	I1210 05:58:02.513718   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.513732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:02.513738   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:02.513799   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:02.537442   57716 cri.go:89] found id: ""
	I1210 05:58:02.537456   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.537472   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:02.537478   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:02.537538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:02.562811   57716 cri.go:89] found id: ""
	I1210 05:58:02.562824   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.562831   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:02.562837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:02.562904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:02.593233   57716 cri.go:89] found id: ""
	I1210 05:58:02.593247   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.593254   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:02.593263   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:02.593283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.649484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:02.649502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:02.668256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:02.668270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:02.746961   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:02.738936   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.739525   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741090   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741618   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.743247   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:02.746984   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:02.746995   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:02.810434   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:02.810451   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:05.338812   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:05.348929   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:05.349015   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:05.376460   57716 cri.go:89] found id: ""
	I1210 05:58:05.376474   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.376481   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:05.376486   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:05.376545   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:05.401572   57716 cri.go:89] found id: ""
	I1210 05:58:05.401585   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.401593   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:05.401598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:05.401657   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:05.426804   57716 cri.go:89] found id: ""
	I1210 05:58:05.426820   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.426827   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:05.426832   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:05.426889   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:05.450557   57716 cri.go:89] found id: ""
	I1210 05:58:05.450570   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.450577   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:05.450583   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:05.450640   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:05.476587   57716 cri.go:89] found id: ""
	I1210 05:58:05.476601   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.476607   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:05.476612   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:05.476669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:05.501716   57716 cri.go:89] found id: ""
	I1210 05:58:05.501730   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.501736   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:05.501742   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:05.501801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:05.526971   57716 cri.go:89] found id: ""
	I1210 05:58:05.526985   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.526992   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:05.527000   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:05.527050   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:05.585508   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:05.585527   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:05.596526   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:05.596542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:05.661377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:05.650856   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.651411   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653229   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653833   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.655530   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:05.661388   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:05.661398   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:05.732863   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:05.732882   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:08.260047   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:08.270586   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:08.270648   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:08.298955   57716 cri.go:89] found id: ""
	I1210 05:58:08.298984   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.298992   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:08.298997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:08.299088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:08.326321   57716 cri.go:89] found id: ""
	I1210 05:58:08.326335   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.326342   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:08.326347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:08.326410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:08.350063   57716 cri.go:89] found id: ""
	I1210 05:58:08.350077   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.350095   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:08.350100   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:08.350157   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:08.374459   57716 cri.go:89] found id: ""
	I1210 05:58:08.374472   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.374480   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:08.374485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:08.374549   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:08.398594   57716 cri.go:89] found id: ""
	I1210 05:58:08.398608   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.398615   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:08.398629   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:08.398685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:08.423334   57716 cri.go:89] found id: ""
	I1210 05:58:08.423348   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.423355   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:08.423366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:08.423424   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:08.448137   57716 cri.go:89] found id: ""
	I1210 05:58:08.448150   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.448157   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:08.448164   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:08.448175   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:08.510732   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:08.502942   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.503736   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505412   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505743   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.507339   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:08.510751   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:08.510764   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:08.572194   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:08.572211   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:08.600446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:08.600463   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:08.657452   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:08.657469   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
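The host-level gathers are bounded on purpose: journalctl -u <unit> -n 400 takes only the newest 400 journal lines for kubelet or containerd, and the dmesg call keeps only warn-and-worse kernel messages (--level warn,err,crit,alert,emerg) with color disabled (-L=never) before trimming to 400 lines. A small wrapper in the same shape, as a sketch with the flag spellings copied from the commands above:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
    )

    // gatherUnitLogs returns the newest n journal lines for a systemd unit.
    func gatherUnitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", strconv.Itoa(n)).Output()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "containerd"} {
            logs, err := gatherUnitLogs(unit, 400)
            if err != nil {
                fmt.Println(unit, "gather failed:", err)
                continue
            }
            fmt.Printf("%s: %d bytes of journal output\n", unit, len(logs))
        }
    }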
	I1210 05:58:11.170762   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:11.180886   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:11.180951   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:11.205555   57716 cri.go:89] found id: ""
	I1210 05:58:11.205569   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.205584   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:11.205590   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:11.205664   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:11.233080   57716 cri.go:89] found id: ""
	I1210 05:58:11.233094   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.233101   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:11.233106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:11.233164   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:11.257793   57716 cri.go:89] found id: ""
	I1210 05:58:11.257807   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.257814   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:11.257821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:11.257879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:11.282030   57716 cri.go:89] found id: ""
	I1210 05:58:11.282042   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.282050   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:11.282055   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:11.282119   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:11.305111   57716 cri.go:89] found id: ""
	I1210 05:58:11.305125   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.305132   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:11.305138   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:11.305196   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:11.329236   57716 cri.go:89] found id: ""
	I1210 05:58:11.329250   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.329257   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:11.329264   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:11.329320   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:11.354605   57716 cri.go:89] found id: ""
	I1210 05:58:11.354620   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.354627   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:11.354635   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:11.354645   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:11.386130   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:11.386146   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:11.444254   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:11.444272   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:11.455429   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:11.455446   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:11.522092   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:11.513675   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.514592   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516162   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516767   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.518501   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:11.522102   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:11.522112   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:14.084603   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:14.094719   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:14.094779   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:14.118507   57716 cri.go:89] found id: ""
	I1210 05:58:14.118520   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.118528   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:14.118533   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:14.118588   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:14.144079   57716 cri.go:89] found id: ""
	I1210 05:58:14.144093   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.144100   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:14.144105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:14.144166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:14.174736   57716 cri.go:89] found id: ""
	I1210 05:58:14.174750   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.174757   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:14.174762   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:14.174837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:14.199688   57716 cri.go:89] found id: ""
	I1210 05:58:14.199709   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.199727   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:14.199733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:14.199789   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:14.227765   57716 cri.go:89] found id: ""
	I1210 05:58:14.227779   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.227786   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:14.227793   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:14.227853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:14.256531   57716 cri.go:89] found id: ""
	I1210 05:58:14.256546   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.256554   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:14.256559   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:14.256628   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:14.281035   57716 cri.go:89] found id: ""
	I1210 05:58:14.281054   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.281062   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:14.281070   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:14.281082   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:14.307632   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:14.307647   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:14.363636   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:14.363655   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:14.374356   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:14.374372   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:14.439204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:14.431102   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.431970   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.433831   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.434179   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.435681   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:14.439214   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:14.439227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:17.000609   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:17.011094   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:17.011152   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:17.034914   57716 cri.go:89] found id: ""
	I1210 05:58:17.034928   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.034935   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:17.034940   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:17.034997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:17.059216   57716 cri.go:89] found id: ""
	I1210 05:58:17.059229   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.059236   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:17.059241   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:17.059297   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:17.084654   57716 cri.go:89] found id: ""
	I1210 05:58:17.084667   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.084674   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:17.084679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:17.084734   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:17.108452   57716 cri.go:89] found id: ""
	I1210 05:58:17.108465   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.108472   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:17.108477   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:17.108538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:17.131638   57716 cri.go:89] found id: ""
	I1210 05:58:17.131652   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.131660   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:17.131666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:17.131724   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:17.157073   57716 cri.go:89] found id: ""
	I1210 05:58:17.157086   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.157093   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:17.157099   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:17.157155   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:17.181834   57716 cri.go:89] found id: ""
	I1210 05:58:17.181849   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.181856   57716 logs.go:284] No container was found matching "kindnet"
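The seven queries above are the same command with a different --name filter; a condensed sketch of the sweep (flags verbatim from the log, empty output is what logs.go reports as "0 containers"):

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet; do
    echo "== $name =="
    sudo crictl ps -a --quiet --name="$name"
  done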
	I1210 05:58:17.181864   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:17.181874   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:17.237484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:17.237500   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:17.248803   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:17.248818   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:17.312123   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:17.304256   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.304922   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.306601   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.307186   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.308722   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:17.312135   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:17.312145   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:17.375552   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:17.375570   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:19.903470   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:19.915506   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:19.915564   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:19.947745   57716 cri.go:89] found id: ""
	I1210 05:58:19.947758   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.947765   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:19.947771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:19.947832   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:19.980662   57716 cri.go:89] found id: ""
	I1210 05:58:19.980676   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.980683   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:19.980688   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:19.980746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:20.014764   57716 cri.go:89] found id: ""
	I1210 05:58:20.014787   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.014795   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:20.014801   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:20.014868   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:20.043079   57716 cri.go:89] found id: ""
	I1210 05:58:20.043093   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.043100   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:20.043106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:20.043168   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:20.071694   57716 cri.go:89] found id: ""
	I1210 05:58:20.071709   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.071717   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:20.071722   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:20.071785   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:20.097931   57716 cri.go:89] found id: ""
	I1210 05:58:20.097945   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.097952   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:20.097958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:20.098028   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:20.122795   57716 cri.go:89] found id: ""
	I1210 05:58:20.122809   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.122816   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:20.122824   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:20.122835   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:20.133825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:20.133840   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:20.194901   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:20.186552   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.187444   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189151   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189874   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.191519   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:20.194911   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:20.194921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:20.256875   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:20.256894   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:20.283841   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:20.283857   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:22.843646   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:22.853725   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:22.853782   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:22.878310   57716 cri.go:89] found id: ""
	I1210 05:58:22.878325   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.878332   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:22.878336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:22.878393   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:22.902470   57716 cri.go:89] found id: ""
	I1210 05:58:22.902483   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.902490   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:22.902495   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:22.902552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:22.929428   57716 cri.go:89] found id: ""
	I1210 05:58:22.929442   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.929449   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:22.929454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:22.929512   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:22.962201   57716 cri.go:89] found id: ""
	I1210 05:58:22.962215   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.962222   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:22.962227   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:22.962286   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:22.988315   57716 cri.go:89] found id: ""
	I1210 05:58:22.988329   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.988336   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:22.988341   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:22.988397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:23.015788   57716 cri.go:89] found id: ""
	I1210 05:58:23.015801   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.015818   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:23.015824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:23.015895   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:23.040476   57716 cri.go:89] found id: ""
	I1210 05:58:23.040490   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.040497   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:23.040505   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:23.040515   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:23.097263   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:23.097281   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:23.108339   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:23.108357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:23.174372   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:23.166022   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.166801   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168382   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168890   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.170644   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:58:23.174382   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:23.174393   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:23.238417   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:23.238433   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:25.767502   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:25.777560   57716 kubeadm.go:602] duration metric: took 4m3.698254406s to restartPrimaryControlPlane
	W1210 05:58:25.777622   57716 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 05:58:25.777697   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
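Since the control plane could not be restarted, minikube falls back to a full reset before re-running init. The reset command as run above, reformatted for readability (kubeadm reset clears the etcd data directory, the static pod manifests, and /etc/kubernetes/*.conf, which is why the config checks that follow find nothing):

  sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
    kubeadm reset --cri-socket /run/containerd/containerd.sock --force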
	I1210 05:58:26.181572   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:58:26.194845   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:58:26.202430   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:58:26.202489   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:58:26.210414   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:58:26.210423   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 05:58:26.210474   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:58:26.218226   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:58:26.218281   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:58:26.225499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:58:26.233426   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:58:26.233479   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:58:26.240639   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.247882   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:58:26.247936   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.255235   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:58:26.263002   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:58:26.263069   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
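The four grep/rm pairs above implement one rule: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it. An equivalent one-shot sketch:

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "https://control-plane.minikube.internal:8441" \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
  done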
	I1210 05:58:26.270271   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:58:26.308640   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:58:26.308937   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:58:26.373888   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:58:26.373948   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 05:58:26.373980   57716 kubeadm.go:319] OS: Linux
	I1210 05:58:26.374022   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:58:26.374069   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:58:26.374113   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:58:26.374157   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:58:26.374200   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:58:26.374244   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:58:26.374300   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:58:26.374343   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:58:26.374385   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:58:26.445771   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:58:26.445880   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:58:26.445970   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:58:26.455518   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:58:26.460828   57716 out.go:252]   - Generating certificates and keys ...
	I1210 05:58:26.460930   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:58:26.461006   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:58:26.461110   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 05:58:26.461178   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 05:58:26.461260   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 05:58:26.461325   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 05:58:26.461413   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 05:58:26.461483   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 05:58:26.461565   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 05:58:26.461644   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 05:58:26.461682   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 05:58:26.461743   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:58:26.520044   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:58:27.005643   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:58:27.519831   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:58:27.780223   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:58:28.060883   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:58:28.061559   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:58:28.064834   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:58:28.067981   57716 out.go:252]   - Booting up control plane ...
	I1210 05:58:28.068070   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:58:28.068143   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:58:28.069383   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:58:28.090093   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:58:28.090188   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:58:28.097949   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:58:28.098042   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:58:28.098080   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:58:28.241595   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:58:28.241705   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:02:28.236858   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00011534s
	I1210 06:02:28.236887   57716 kubeadm.go:319] 
	I1210 06:02:28.236942   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:02:28.236986   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:02:28.237128   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:02:28.237135   57716 kubeadm.go:319] 
	I1210 06:02:28.237233   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:02:28.237262   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:02:28.237291   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:02:28.237295   57716 kubeadm.go:319] 
	I1210 06:02:28.241711   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:02:28.242149   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:02:28.242254   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:02:28.242529   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:02:28.242535   57716 kubeadm.go:319] 
	I1210 06:02:28.242598   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
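kubeadm's own troubleshooting suggestions, plus the exact probe it performs during [kubelet-check], collected from the messages above (run inside the node):

  systemctl status kubelet
  journalctl -xeu kubelet
  # The health probe kubeadm retried for 4m0s before giving up:
  curl -sSL http://127.0.0.1:10248/healthz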
	W1210 06:02:28.242730   57716 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00011534s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
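The second SystemVerification warning is the most suggestive one here: the host kernel (5.15.0-1084-aws) is running cgroups v1, which kubelet v1.35 refuses by default. A speculative sketch of the opt-out the warning describes, assuming the kubeadm config at /var/tmp/minikube/kubeadm.yaml does not already carry a KubeletConfiguration document (if it does, the field belongs in that document instead); 'failCgroupV1' is the KubeletConfiguration spelling of the 'FailCgroupV1' option named in the warning:

  # Hypothetical: append a KubeletConfiguration document that explicitly
  # allows cgroups v1, to the kubeadm config minikube already writes.
  cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
  ---
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failCgroupV1: false
  EOF

Per the warning text, the SystemVerification preflight check must also be skipped explicitly, which the init invocation above already does via its --ignore-preflight-errors list.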
	
	I1210 06:02:28.242815   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:02:28.653276   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:02:28.666846   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:02:28.666902   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:02:28.676196   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:02:28.676206   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 06:02:28.676262   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:02:28.683929   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:02:28.683984   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:02:28.691531   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:02:28.699193   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:02:28.699247   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:02:28.706499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.713695   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:02:28.713761   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.721311   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:02:28.729191   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:02:28.729245   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:02:28.737059   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:02:28.777392   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:02:28.777754   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:02:28.849302   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:02:28.849368   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:02:28.849403   57716 kubeadm.go:319] OS: Linux
	I1210 06:02:28.849460   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:02:28.849508   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:02:28.849555   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:02:28.849602   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:02:28.849649   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:02:28.849696   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:02:28.849745   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:02:28.849792   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:02:28.849837   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:02:28.921564   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:02:28.921662   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:02:28.921748   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:02:28.926509   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:02:28.929904   57716 out.go:252]   - Generating certificates and keys ...
	I1210 06:02:28.929994   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:02:28.930057   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:02:28.930131   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:02:28.930201   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:02:28.930270   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:02:28.930322   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:02:28.930384   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:02:28.930444   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:02:28.930517   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:02:28.930589   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:02:28.930766   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:02:28.930854   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:02:29.206630   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:02:29.720612   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:02:29.887413   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:02:30.011857   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:02:30.197709   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:02:30.198347   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:02:30.201006   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:02:30.204123   57716 out.go:252]   - Booting up control plane ...
	I1210 06:02:30.204220   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:02:30.204296   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:02:30.204794   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:02:30.227311   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:02:30.227437   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:02:30.235547   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:02:30.235634   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:02:30.235945   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:02:30.373162   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:02:30.373269   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:06:30.371537   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000138118s
	I1210 06:06:30.371561   57716 kubeadm.go:319] 
	I1210 06:06:30.371641   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:06:30.371685   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:06:30.371790   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:06:30.371795   57716 kubeadm.go:319] 
	I1210 06:06:30.371898   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:06:30.371929   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:06:30.371959   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:06:30.371962   57716 kubeadm.go:319] 
	I1210 06:06:30.376139   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:06:30.376577   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:06:30.376687   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:06:30.376961   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:06:30.376966   57716 kubeadm.go:319] 
	I1210 06:06:30.377035   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:06:30.377094   57716 kubeadm.go:403] duration metric: took 12m8.33567442s to StartCluster
	I1210 06:06:30.377125   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:06:30.377187   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:06:30.401132   57716 cri.go:89] found id: ""
	I1210 06:06:30.401147   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.401154   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:30.401160   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:06:30.401219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:06:30.437615   57716 cri.go:89] found id: ""
	I1210 06:06:30.437630   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.437637   57716 logs.go:284] No container was found matching "etcd"
	I1210 06:06:30.437642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:06:30.437699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:06:30.462667   57716 cri.go:89] found id: ""
	I1210 06:06:30.462681   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.462688   57716 logs.go:284] No container was found matching "coredns"
	I1210 06:06:30.462693   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:06:30.462752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:06:30.491407   57716 cri.go:89] found id: ""
	I1210 06:06:30.491420   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.491428   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:30.491433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:06:30.491493   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:06:30.516073   57716 cri.go:89] found id: ""
	I1210 06:06:30.516086   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.516092   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:30.516098   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:06:30.516154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:06:30.540636   57716 cri.go:89] found id: ""
	I1210 06:06:30.540649   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.540656   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:30.540679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:06:30.540736   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:06:30.565548   57716 cri.go:89] found id: ""
	I1210 06:06:30.565570   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.565578   57716 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:30.565586   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:30.565596   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:30.620548   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:30.620565   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:30.631284   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:30.631299   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:30.692450   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:30.684857   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.685223   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.686724   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.687076   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.688628   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:06:30.692461   57716 logs.go:123] Gathering logs for containerd ...
	I1210 06:06:30.692471   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:06:30.755422   57716 logs.go:123] Gathering logs for container status ...
	I1210 06:06:30.755444   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:06:30.784033   57716 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:06:30.784067   57716 out.go:285] * 
	W1210 06:06:30.784157   57716 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.784176   57716 out.go:285] * 
	W1210 06:06:30.786468   57716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:06:30.793223   57716 out.go:203] 
	W1210 06:06:30.796021   57716 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.796079   57716 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:06:30.796099   57716 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:06:30.799180   57716 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477949649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477963918Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477995246Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478012321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478021774Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478031620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478040424Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478051649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478070291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478098854Z" level=info msg="Connect containerd service"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478383782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478960226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.497963642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498025206Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498057067Z" level=info msg="Start subscribing containerd event"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498101696Z" level=info msg="Start recovering state"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526273092Z" level=info msg="Start event monitor"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526463774Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526536103Z" level=info msg="Start streaming server"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526593630Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526675700Z" level=info msg="runtime interface starting up..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526739774Z" level=info msg="starting plugins..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526805581Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 05:54:20 functional-644034 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.528842308Z" level=info msg="containerd successfully booted in 0.071400s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:32.068002   21645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:32.068758   21645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:32.070482   21645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:32.071094   21645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:32.072885   21645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 06:06:32 up 49 min,  0 user,  load average: 0.28, 0.19, 0.37
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:06:28 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:29 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 06:06:29 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:29 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:29 functional-644034 kubelet[21445]: E1210 06:06:29.708053   21445 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:29 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:29 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:30 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 06:06:30 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:30 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:30 functional-644034 kubelet[21464]: E1210 06:06:30.475971   21464 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:30 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:30 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 06:06:31 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:31 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:31 functional-644034 kubelet[21553]: E1210 06:06:31.173028   21553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:06:31 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:31 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:31 functional-644034 kubelet[21623]: E1210 06:06:31.962273   21623 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
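
The kubelet journal above pins down the actual failure: kubelet v1.35.0-rc.1 refuses to run because this host still uses cgroup v1. A minimal diagnostic sketch, assuming shell access to the node; only the two kubeadm-suggested commands appear in the run itself, the stat probe is an added assumption:

    # "cgroup2fs" here would mean the unified v2 hierarchy; "tmpfs" means the
    # legacy v1 layout that kubelet v1.35 rejects unless FailCgroupV1 is false.
    stat -fc %T /sys/fs/cgroup/
    # Follow the restart loop seen in the journal excerpt above
    # (the same commands kubeadm suggests in its error message).
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50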
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (360.688711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (735.58s)
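For reference, the retry named in the failure output would look like the sketch below. It is assembled only from the log's own Suggestion line; on this cgroup-v1 host the FailCgroupV1 kubelet option from the preflight warning would additionally have to be set to false before kubelet v1.35 can pass validation:

    # Retry suggested by the error message; --extra-config forwards the
    # flag to the kubelet started inside the node container.
    out/minikube-linux-arm64 start -p functional-644034 \
      --extra-config=kubelet.cgroup-driver=systemd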

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-644034 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-644034 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (62.749592ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-644034 get po -l tier=control-plane -n kube-system -o=json": exit status 1
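The failure is pure connectivity: every query dies with connection refused on 192.168.49.2:8441, matching the stopped apiserver from the previous test. A quick probe, assuming curl is available on the host, would confirm that nothing is listening before suspecting the label selector:

    # With the control plane down this reproduces the refusal directly;
    # -k skips certificate verification since only reachability matters.
    curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz \
      || echo "apiserver unreachable"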
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
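The inspect dump above can be narrowed to the two facts this post-mortem actually uses, the container state and the host port mapped to the apiserver, via docker's standard --format templating (a sketch, not part of the test run):

    # Prints "running", then the port map showing 8441/tcp -> 127.0.0.1:32791.
    docker inspect -f '{{.State.Status}}' functional-644034
    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-644034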
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (300.562991ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-944360 image ls --format yaml --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ ssh     │ functional-944360 ssh pgrep buildkitd                                                                                                                 │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ image   │ functional-944360 image ls --format json --alsologtostderr                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls --format table --alsologtostderr                                                                                           │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr                                                │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ image   │ functional-944360 image ls                                                                                                                            │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ delete  │ -p functional-944360                                                                                                                                  │ functional-944360 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │ 10 Dec 25 05:39 UTC │
	│ start   │ -p functional-644034 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:39 UTC │                     │
	│ start   │ -p functional-644034 --alsologtostderr -v=8                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:47 UTC │                     │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add registry.k8s.io/pause:latest                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache add minikube-local-cache-test:functional-644034                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ functional-644034 cache delete minikube-local-cache-test:functional-644034                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl images                                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ cache   │ functional-644034 cache reload                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ kubectl │ functional-644034 kubectl -- --context functional-644034 get pods                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ start   │ -p functional-644034 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:54:17.426935   57716 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:54:17.427082   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427086   57716 out.go:374] Setting ErrFile to fd 2...
	I1210 05:54:17.427090   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427361   57716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:54:17.427717   57716 out.go:368] Setting JSON to false
	I1210 05:54:17.428531   57716 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2208,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:54:17.428587   57716 start.go:143] virtualization:  
	I1210 05:54:17.432151   57716 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:54:17.435955   57716 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:54:17.436010   57716 notify.go:221] Checking for updates...
	I1210 05:54:17.441966   57716 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:54:17.444885   57716 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:54:17.447901   57716 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:54:17.450919   57716 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:54:17.453767   57716 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:54:17.457197   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:17.457296   57716 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:54:17.484154   57716 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:54:17.484249   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.544910   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.535741476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:54:17.545002   57716 docker.go:319] overlay module found
	I1210 05:54:17.548056   57716 out.go:179] * Using the docker driver based on existing profile
	I1210 05:54:17.550880   57716 start.go:309] selected driver: docker
	I1210 05:54:17.550888   57716 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.550973   57716 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:54:17.551147   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.606051   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.597194445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
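[Editorial sketch] The docker info dump above comes from the `docker system info --format "{{json .}}"` run at 05:54:17.551: the daemon prints its full info structure as one JSON object, which minikube decodes before reusing the profile. A minimal Go sketch of the same probe, assuming a Docker CLI on PATH; the struct names only a few of the fields visible above and is illustrative, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo captures a handful of the fields visible in the log dump;
// the real output carries many more.
type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	Driver        string `json:"Driver"`
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
}

func main() {
	// "--format {{json .}}" makes docker print one JSON object on stdout.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s cpus=%d mem=%d cgroup=%s version=%s\n",
		info.Driver, info.NCPU, info.MemTotal, info.CgroupDriver, info.ServerVersion)
}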
	I1210 05:54:17.606475   57716 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:54:17.606497   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:17.606551   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:17.606592   57716 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.611686   57716 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:54:17.614501   57716 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:54:17.617345   57716 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:54:17.620208   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:17.620284   57716 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:54:17.639591   57716 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:54:17.639602   57716 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:54:17.674108   57716 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:54:17.814864   57716 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
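[Editorial sketch] Both preload mirrors answer 404 for v1.35.0-rc.1 (no preload tarball is published for release candidates), so the run later falls back to caching images one by one at 05:54:18.298. A sketch of the mirror probe, under the assumption that a plain HEAD request is enough to decide; `firstAvailable` is a hypothetical helper, not minikube's function:

package main

import (
	"fmt"
	"net/http"
)

// firstAvailable returns the first URL whose HEAD request answers 200,
// or "" when every mirror is missing the tarball (the 404 case above).
func firstAvailable(urls []string) string {
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u
		}
	}
	return ""
}

func main() {
	tarball := "preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4"
	mirrors := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/" + tarball,
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/" + tarball,
	}
	if u := firstAvailable(mirrors); u != "" {
		fmt.Println("preload found at", u)
	} else {
		fmt.Println("no preload published; falling back to per-image cache")
	}
}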
	I1210 05:54:17.815057   57716 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:54:17.815157   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:17.815311   57716 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:54:17.815341   57716 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:17.815383   57716 start.go:364] duration metric: took 26.643µs to acquireMachinesLock for "functional-644034"
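[Editorial sketch] The machines lock spec printed above ({Delay:500ms Timeout:10m0s}) describes a retry loop: try to take the lock, sleep the delay on contention, give up at the timeout. The uncontended case here takes 26.643µs. A channel-based illustration of that shape; minikube actually uses a cross-process file lock, so the in-process channel below is an assumption for brevity:

package main

import (
	"errors"
	"fmt"
	"time"
)

// acquire polls the lock every delay until it succeeds or timeout
// elapses, mirroring the Delay:500ms Timeout:10m0s spec in the log.
func acquire(lock chan struct{}, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		select {
		case lock <- struct{}{}: // lock taken
			return nil
		default:
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := make(chan struct{}, 1)
	start := time.Now()
	if err := acquire(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to acquire lock\n", time.Since(start)) // microseconds when uncontended
	<-lock // release
}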
	I1210 05:54:17.815394   57716 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:54:17.815398   57716 fix.go:54] fixHost starting: 
	I1210 05:54:17.815657   57716 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:54:17.832534   57716 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:54:17.832556   57716 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:54:17.836244   57716 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:54:17.836271   57716 machine.go:94] provisionDockerMachine start ...
	I1210 05:54:17.836346   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:17.858100   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:17.858407   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:17.858412   57716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:54:17.974240   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.011085   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:54:18.011101   57716 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:54:18.011170   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.035073   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.035392   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.035402   57716 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:54:18.133146   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.205140   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:54:18.205224   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.223112   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.223456   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.223470   57716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
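[Editorial sketch] The shell fragment above guarantees the new hostname resolves locally: if no /etc/hosts line already ends in the name, it rewrites the 127.0.1.1 entry or appends one. The same logic expressed in Go, run against a scratch file rather than the real /etc/hosts (paths and behavior are a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname mirrors the shell above: skip if the name is already
// mapped, else rewrite the 127.0.1.1 line or append a new one.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return nil // already mapped
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	tmp := "hosts.test" // scratch copy, not the real /etc/hosts
	os.WriteFile(tmp, []byte("127.0.0.1 localhost\n127.0.1.1 old-name\n"), 0644)
	if err := ensureHostname(tmp, "functional-644034"); err != nil {
		panic(err)
	}
	out, _ := os.ReadFile(tmp)
	fmt.Print(string(out))
}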
	I1210 05:54:18.298229   57716 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298265   57716 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298312   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:54:18.298319   57716 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.857µs
	I1210 05:54:18.298326   57716 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:54:18.298329   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:54:18.298336   57716 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298351   57716 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 82.455µs
	I1210 05:54:18.298357   57716 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298363   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:54:18.298368   57716 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.182µs
	I1210 05:54:18.298372   57716 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:54:18.298368   57716 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298381   57716 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298411   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:54:18.298406   57716 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298417   57716 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.08µs
	I1210 05:54:18.298422   57716 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:54:18.298434   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:54:18.298430   57716 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298438   57716 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 33.1µs
	I1210 05:54:18.298443   57716 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:54:18.298232   57716 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298464   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:54:18.298468   57716 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 256.891µs
	I1210 05:54:18.298472   57716 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298474   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:54:18.298480   57716 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.314µs
	I1210 05:54:18.298482   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:54:18.298484   57716 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298489   57716 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 122.242µs
	I1210 05:54:18.298496   57716 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298511   57716 cache.go:87] Successfully saved all images to host disk.
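[Editorial sketch] With no preload tarball, each image is cached to its own tar file under a per-path lock, and every save above short-circuits in microseconds because the files already exist. A sketch of that exists-else-save pattern; the sync.Map of mutexes stands in for minikube's file locks and the placeholder write stands in for the real image export:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

var locks sync.Map // one mutex per destination path

// saveToTar caches one image as a tar file unless it already exists,
// matching the "exists ... skipping" / "succeeded" pattern in the log.
func saveToTar(image, cacheDir string) error {
	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	mu, _ := locks.LoadOrStore(dst, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	start := time.Now()
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q took %s (already exists)\n", image, time.Since(start))
		return nil
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	if err := os.WriteFile(dst, []byte("tar payload placeholder"), 0644); err != nil {
		return err
	}
	fmt.Printf("cache image %q took %s (saved)\n", image, time.Since(start))
	return nil
}

func main() {
	var wg sync.WaitGroup
	for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.6-0"} {
		wg.Add(1)
		go func(img string) {
			defer wg.Done()
			if err := saveToTar(img, "cache/images/arm64"); err != nil {
				fmt.Println("save failed:", err)
			}
		}(img)
	}
	wg.Wait()
}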
	I1210 05:54:18.371362   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:54:18.371378   57716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:54:18.371397   57716 ubuntu.go:190] setting up certificates
	I1210 05:54:18.371416   57716 provision.go:84] configureAuth start
	I1210 05:54:18.371483   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:18.389550   57716 provision.go:143] copyHostCerts
	I1210 05:54:18.389620   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:54:18.389627   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:54:18.389704   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:54:18.389803   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:54:18.389808   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:54:18.389833   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:54:18.389882   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:54:18.389885   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:54:18.389906   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:54:18.389948   57716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
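[Editorial sketch] The server cert generated above must carry every name and IP the machine is reachable by, hence the SAN list [127.0.0.1 192.168.49.2 functional-644034 localhost minikube]. A minimal crypto/x509 sketch producing a certificate with those SANs; the real server.pem is signed by minikube's CA, while this example self-signs to stay short:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs copied from the provision log line above.
	dnsNames := []string{"functional-644034", "localhost", "minikube"}
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-644034"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		DNSNames:     dnsNames,
		IPAddresses:  ips,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}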
	I1210 05:54:18.683488   57716 provision.go:177] copyRemoteCerts
	I1210 05:54:18.683553   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:54:18.683598   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.701578   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.806523   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:54:18.823889   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:54:18.841176   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:54:18.858693   57716 provision.go:87] duration metric: took 487.253139ms to configureAuth
	I1210 05:54:18.858709   57716 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:54:18.858903   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:18.858907   57716 machine.go:97] duration metric: took 1.02263281s to provisionDockerMachine
	I1210 05:54:18.858914   57716 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:54:18.858924   57716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:54:18.858977   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:54:18.859033   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.876377   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.982817   57716 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:54:18.986081   57716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:54:18.986098   57716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:54:18.986108   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:54:18.986162   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:54:18.986244   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:54:18.986314   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:54:18.986361   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:54:18.994265   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:19.014263   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:54:19.031905   57716 start.go:296] duration metric: took 172.976805ms for postStartSetup
	I1210 05:54:19.031977   57716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:54:19.032030   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.049399   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.152285   57716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:54:19.157124   57716 fix.go:56] duration metric: took 1.341718894s for fixHost
	I1210 05:54:19.157140   57716 start.go:83] releasing machines lock for "functional-644034", held for 1.341749918s
	I1210 05:54:19.157254   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:19.178380   57716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:54:19.178438   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.178590   57716 ssh_runner.go:195] Run: cat /version.json
	I1210 05:54:19.178645   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.200917   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.208552   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.319193   57716 ssh_runner.go:195] Run: systemctl --version
	I1210 05:54:19.412255   57716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:54:19.416947   57716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:54:19.417021   57716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:54:19.424890   57716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:54:19.424903   57716 start.go:496] detecting cgroup driver to use...
	I1210 05:54:19.424932   57716 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:54:19.425004   57716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:54:19.440745   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:54:19.453977   57716 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:54:19.454039   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:54:19.469832   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:54:19.482994   57716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:54:19.599891   57716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:54:19.715074   57716 docker.go:234] disabling docker service ...
	I1210 05:54:19.715128   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:54:19.730660   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:54:19.743680   57716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:54:19.856717   57716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:54:20.006361   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:54:20.021419   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:54:20.038786   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.191836   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:54:20.201486   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:54:20.210685   57716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:54:20.210748   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:54:20.219896   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.228857   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:54:20.237489   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.246148   57716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:54:20.253998   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:54:20.262613   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:54:20.271236   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
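[Editorial sketch] The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, normalize runtime names, and re-enable unprivileged ports. The same style of edit expressed with Go regexps, applied to a short sample fragment (the real config file is far larger):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// Each entry mirrors one sed call in the log: anchored, multiline,
	// indentation preserved via the ${1} capture.
	edits := []struct{ re, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver
	}
	for _, e := range edits {
		config = regexp.MustCompile(e.re).ReplaceAllString(config, e.repl)
	}
	fmt.Print(config)
}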
	I1210 05:54:20.280061   57716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:54:20.287623   57716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:54:20.295156   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:20.415485   57716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:54:20.529881   57716 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:54:20.529941   57716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:54:20.533915   57716 start.go:564] Will wait 60s for crictl version
	I1210 05:54:20.533980   57716 ssh_runner.go:195] Run: which crictl
	I1210 05:54:20.537488   57716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:54:20.562843   57716 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:54:20.562909   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.586515   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.613476   57716 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:54:20.616435   57716 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:54:20.632538   57716 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:54:20.639504   57716 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 05:54:20.642345   57716 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:54:20.642611   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.817647   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.968512   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
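[Editorial sketch] The repeated "Not caching binary" lines point at a download URL whose "?checksum=file:...sha256" query pairs the kubeadm binary with a published SHA-256 digest. minikube delegates this to a download library; a hand-rolled sketch of the idea, assuming the .sha256 file holds a bare hex digest (`fetch` and `get` are hypothetical helpers):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads url and verifies the body against the hex digest
// published at url+".sha256", failing on any mismatch.
func fetch(url string) ([]byte, error) {
	body, err := get(url)
	if err != nil {
		return nil, err
	}
	want, err := get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	sum := sha256.Sum256(body)
	if hex.EncodeToString(sum[:]) != strings.TrimSpace(string(want)) {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func get(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	bin, err := fetch("https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm")
	if err != nil {
		panic(err)
	}
	fmt.Printf("downloaded %d bytes\n", len(bin))
}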
	I1210 05:54:21.117681   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:21.117754   57716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:54:21.141602   57716 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:54:21.141614   57716 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:54:21.141620   57716 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:54:21.141710   57716 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:54:21.141768   57716 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:54:21.167304   57716 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 05:54:21.167327   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:21.167335   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:21.167343   57716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:54:21.167363   57716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:54:21.167468   57716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
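[Editorial sketch] The kubeadm config above is rendered from the options struct logged at kubeadm.go:190: the same fields (port, version, pod and service subnets) appear substituted into the YAML. A reduced sketch of such a render step using text/template; the template text and field names are illustrative, trimmed to the ClusterConfiguration fields, and not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// opts holds the few values this sketch substitutes; the real options
// struct carries far more.
type opts struct {
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	err := t.Execute(os.Stdout, opts{
		APIServerPort:     8441,
		KubernetesVersion: "v1.35.0-rc.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}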
	
	I1210 05:54:21.167528   57716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:54:21.175157   57716 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:54:21.175220   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:54:21.182336   57716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:54:21.194714   57716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:54:21.206951   57716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1210 05:54:21.218855   57716 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:54:21.222543   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:21.341027   57716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:54:21.356762   57716 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:54:21.356773   57716 certs.go:195] generating shared ca certs ...
	I1210 05:54:21.356789   57716 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:54:21.356923   57716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:54:21.356964   57716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:54:21.356970   57716 certs.go:257] generating profile certs ...
	I1210 05:54:21.357053   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:54:21.357114   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:54:21.357152   57716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:54:21.357258   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:54:21.357288   57716 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:54:21.357307   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:54:21.357333   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:54:21.357354   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:54:21.357375   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:54:21.357423   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:21.357978   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:54:21.378744   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:54:21.397697   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:54:21.419957   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:54:21.438314   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:54:21.455834   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:54:21.473865   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:54:21.494612   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:54:21.512109   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:54:21.529720   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:54:21.547670   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:54:21.568707   57716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:54:21.582063   57716 ssh_runner.go:195] Run: openssl version
	I1210 05:54:21.588394   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.595862   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:54:21.603363   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607193   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607247   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.648234   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:54:21.655574   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.662804   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:54:21.670452   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674182   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674235   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.715273   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:54:21.722425   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.729498   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:54:21.736743   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740323   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740376   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.780972   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
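[Editorial sketch] Each `openssl x509 -hash` / `ln -fs` / `test -L <hash>.0` triple above installs a CA into the OpenSSL trust directory, where clients look certificates up by an 8-hex-digit subject hash used as the symlink name (e.g. b5213941.0 for minikubeCA). A sketch of the same installation step, shelling out to openssl for the hash; paths are illustrative and the run needs write access to the target directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks certPath into dir under "<subject-hash>.0", the
// layout OpenSSL-based clients scan. Requires the openssl binary.
func installCA(certPath, dir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // mimic ln -fs: replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("installed", link) // e.g. /etc/ssl/certs/b5213941.0
}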
	I1210 05:54:21.788152   57716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:54:21.791770   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:54:21.832469   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:54:21.875333   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:54:21.915959   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:54:21.956552   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:54:21.998157   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
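[Editorial sketch] The `-checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours from now; a failing check is what triggers regeneration. The pure-Go equivalent, parsing the PEM and comparing NotAfter directly:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors "openssl x509 -noout -checkend": report whether
// the certificate at path will be expired d from now.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; would regenerate")
	} else {
		fmt.Println("certificate valid for at least another day")
	}
}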
	I1210 05:54:22.041430   57716 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:22.041511   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:54:22.041600   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.071281   57716 cri.go:89] found id: ""
	I1210 05:54:22.071348   57716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:54:22.079286   57716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:54:22.079296   57716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:54:22.079350   57716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:54:22.086777   57716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.087401   57716 kubeconfig.go:125] found "functional-644034" server: "https://192.168.49.2:8441"
	I1210 05:54:22.088728   57716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:54:22.096851   57716 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:39:45.645176984 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 05:54:21.211483495 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1210 05:54:22.096860   57716 kubeadm.go:1161] stopping kube-system containers ...
	I1210 05:54:22.096878   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 05:54:22.096937   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.122240   57716 cri.go:89] found id: ""
	I1210 05:54:22.122301   57716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 05:54:22.139987   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:54:22.147655   57716 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 05:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:43 /etc/kubernetes/scheduler.conf
	
	I1210 05:54:22.147725   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:54:22.155240   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:54:22.163328   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.163381   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:54:22.170477   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.178188   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.178242   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.185324   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:54:22.192557   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.192613   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:54:22.199756   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:54:22.207462   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:22.254516   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:23.834868   57716 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.580327189s)
	I1210 05:54:23.834928   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.033268   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.102476   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
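	Rather than a full kubeadm init, the restart path replays individual init phases against the regenerated config. The sequence, lifted from the commands above:

	  KPATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH"
	  sudo env PATH="$KPATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$KPATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$KPATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$KPATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$KPATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml

	Everything through "etcd local" completes in about two seconds; the failure that follows is not in the phases themselves but in the apiserver never coming up afterwards.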
	I1210 05:54:24.150822   57716 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:54:24.150892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[polling continues: sudo pgrep -xnf kube-apiserver.*minikube.* repeated every ~500 ms from 05:54:24.651 through 05:55:23.651 (about 120 identical lines), with no kube-apiserver process ever found]
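	The wait above polls for the apiserver process on a roughly 500 ms cadence, visible in the alternating .151/.651 timestamps. A minimal sketch of such a loop; the interval is inferred from the log, and the overall budget is an assumption:

	  deadline=$((SECONDS + 300))   # total budget assumed, not taken from the log
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    (( SECONDS >= deadline )) && { echo "kube-apiserver never appeared" >&2; break; }
	    sleep 0.5   # matches the ~500 ms spacing of the poll lines
	  done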
	I1210 05:55:24.151853   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:24.151952   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:24.176715   57716 cri.go:89] found id: ""
	I1210 05:55:24.176729   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.176736   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:24.176741   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:24.176801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:24.199798   57716 cri.go:89] found id: ""
	I1210 05:55:24.199811   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.199819   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:24.199824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:24.199881   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:24.223446   57716 cri.go:89] found id: ""
	I1210 05:55:24.223459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.223466   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:24.223471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:24.223533   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:24.247963   57716 cri.go:89] found id: ""
	I1210 05:55:24.247976   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.247984   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:24.247989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:24.248052   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:24.271064   57716 cri.go:89] found id: ""
	I1210 05:55:24.271078   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.271085   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:24.271090   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:24.271156   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:24.295582   57716 cri.go:89] found id: ""
	I1210 05:55:24.295595   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.295603   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:24.295608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:24.295665   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:24.319439   57716 cri.go:89] found id: ""
	I1210 05:55:24.319459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.319466   57716 logs.go:284] No container was found matching "kindnet"
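	With no process found, the same absence is then confirmed at the container level: one crictl query per control-plane component, each returning an empty ID list. The equivalent shell form, component list taken from the log:

	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "no container matching \"$name\""
	  done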
	I1210 05:55:24.319474   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:24.319484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:24.374536   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:24.374555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:24.385677   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:24.385693   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:24.468968   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:24.460916   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.461690   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463385   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463760   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.465289   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:24.460916   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.461690   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463385   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463760   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.465289   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:24.468989   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:24.469008   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:24.534097   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:24.534114   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
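	With nothing running, the diagnostic pass falls back to host-side logs. The five collection commands, as executed above:

	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u containerd -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

	Only the describe-nodes step has to reach the apiserver, and with nothing listening on localhost:8441 it fails with "connection refused", as the stderr block above and the later retries show.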
	[retry cycles at 05:55:27, 05:55:29, 05:55:32, 05:55:35, and 05:55:38 repeat the block above, apart from timestamps, kubectl PIDs, and the order of the log-gathering steps: pgrep finds no apiserver; crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers; and kubectl describe nodes is again refused on localhost:8441]
	I1210 05:55:41.731159   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:41.741270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:41.741329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:41.765933   57716 cri.go:89] found id: ""
	I1210 05:55:41.765946   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.765953   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:41.765958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:41.766034   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:41.790822   57716 cri.go:89] found id: ""
	I1210 05:55:41.790842   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.790850   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:41.790855   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:41.790924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:41.817287   57716 cri.go:89] found id: ""
	I1210 05:55:41.817300   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.817312   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:41.817318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:41.817386   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:41.842964   57716 cri.go:89] found id: ""
	I1210 05:55:41.842978   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.842986   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:41.842991   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:41.843068   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:41.871615   57716 cri.go:89] found id: ""
	I1210 05:55:41.871629   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.871637   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:41.871642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:41.871699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:41.896188   57716 cri.go:89] found id: ""
	I1210 05:55:41.896216   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.896223   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:41.896229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:41.896294   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:41.930282   57716 cri.go:89] found id: ""
	I1210 05:55:41.930296   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.930303   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:41.930311   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:41.930320   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:41.985380   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:41.985397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:42.004532   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:42.004551   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:42.075101   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:42.075129   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:42.075143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:42.145894   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:42.145929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
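
Every describe-nodes attempt in this loop dies with "connection refused" against localhost:8441, meaning nothing is listening on the apiserver port this profile's kubeconfig points at. A quick confirmation from inside the node, assuming curl and ss are available in the minikube image (port 8441 is taken from the errors above):

    # Is anything bound to the apiserver port?
    sudo ss -ltn 'sport = :8441'
    # kube-apiserver serves /livez and /readyz health endpoints; -k skips
    # TLS verification because the cluster uses a self-signed CA.
    curl -k https://localhost:8441/livez
    curl -k https://localhost:8441/readyz?verbose

When the port check comes back empty, as it would here, the kubelet journal that the loop collects next is the place to look for why the apiserver static pod never started.
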
	I1210 05:55:44.679885   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:44.690876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:44.690937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:44.720897   57716 cri.go:89] found id: ""
	I1210 05:55:44.720911   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.720918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:44.720923   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:44.720983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:44.745408   57716 cri.go:89] found id: ""
	I1210 05:55:44.745421   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.745427   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:44.745432   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:44.745495   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:44.773707   57716 cri.go:89] found id: ""
	I1210 05:55:44.773721   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.773728   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:44.773733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:44.773792   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:44.798508   57716 cri.go:89] found id: ""
	I1210 05:55:44.798522   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.798529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:44.798535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:44.798597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:44.822493   57716 cri.go:89] found id: ""
	I1210 05:55:44.822507   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.822515   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:44.822519   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:44.822578   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:44.847294   57716 cri.go:89] found id: ""
	I1210 05:55:44.847308   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.847316   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:44.847321   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:44.847380   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:44.870447   57716 cri.go:89] found id: ""
	I1210 05:55:44.870460   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.870468   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:44.870475   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:44.870485   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:44.926160   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:44.926177   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:44.937022   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:44.937037   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:45.007191   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:45.007203   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:45.007215   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:45.103439   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:45.103467   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
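
The per-iteration log bundle can also be reproduced by hand with the same invocations the gatherer runs; a sketch of the equivalent one-shot collection, using the exact unit names and flags shown in the log:

    sudo journalctl -u kubelet -n 400     > kubelet.log
    sudo journalctl -u containerd -n 400  > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg \
      | tail -n 400                       > dmesg.log
    sudo "$(which crictl || echo crictl)" ps -a > containers.log
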
	I1210 05:55:47.653520   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:47.663666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:47.663731   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:47.697444   57716 cri.go:89] found id: ""
	I1210 05:55:47.697457   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.697464   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:47.697469   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:47.697529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:47.728308   57716 cri.go:89] found id: ""
	I1210 05:55:47.728322   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.728329   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:47.728334   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:47.728391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:47.753518   57716 cri.go:89] found id: ""
	I1210 05:55:47.753531   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.753538   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:47.753543   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:47.753600   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:47.777296   57716 cri.go:89] found id: ""
	I1210 05:55:47.777309   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.777316   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:47.777322   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:47.777378   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:47.800977   57716 cri.go:89] found id: ""
	I1210 05:55:47.800998   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.801005   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:47.801010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:47.801067   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:47.825052   57716 cri.go:89] found id: ""
	I1210 05:55:47.825065   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.825073   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:47.825078   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:47.825147   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:47.848863   57716 cri.go:89] found id: ""
	I1210 05:55:47.848876   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.848883   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:47.848892   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:47.848902   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:47.905124   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:47.905139   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:47.915783   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:47.915800   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:47.980730   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:47.980740   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:47.980750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:48.042937   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:48.042955   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:50.581353   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:50.591210   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:50.591269   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:50.620774   57716 cri.go:89] found id: ""
	I1210 05:55:50.620788   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.620794   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:50.620800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:50.620864   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:50.645050   57716 cri.go:89] found id: ""
	I1210 05:55:50.645064   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.645071   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:50.645082   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:50.645146   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:50.679878   57716 cri.go:89] found id: ""
	I1210 05:55:50.679890   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.679897   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:50.679903   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:50.679960   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:50.710005   57716 cri.go:89] found id: ""
	I1210 05:55:50.710018   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.710026   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:50.710032   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:50.710088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:50.744288   57716 cri.go:89] found id: ""
	I1210 05:55:50.744302   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.744311   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:50.744317   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:50.744373   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:50.767954   57716 cri.go:89] found id: ""
	I1210 05:55:50.767967   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.767974   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:50.767980   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:50.768037   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:50.796157   57716 cri.go:89] found id: ""
	I1210 05:55:50.796171   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.796179   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:50.796186   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:50.796196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:50.851621   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:50.851638   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:50.863074   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:50.863091   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:50.939619   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:50.939629   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:50.939639   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:51.008577   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:51.008598   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:53.537065   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:53.546821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:53.546878   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:53.571853   57716 cri.go:89] found id: ""
	I1210 05:55:53.571867   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.571874   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:53.571879   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:53.571937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:53.595941   57716 cri.go:89] found id: ""
	I1210 05:55:53.595955   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.595962   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:53.595967   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:53.596023   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:53.620466   57716 cri.go:89] found id: ""
	I1210 05:55:53.620480   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.620486   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:53.620492   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:53.620546   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:53.643628   57716 cri.go:89] found id: ""
	I1210 05:55:53.643641   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.643649   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:53.643655   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:53.643711   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:53.673517   57716 cri.go:89] found id: ""
	I1210 05:55:53.673532   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.673539   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:53.673545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:53.673601   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:53.709885   57716 cri.go:89] found id: ""
	I1210 05:55:53.709899   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.709906   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:53.709911   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:53.709974   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:53.739765   57716 cri.go:89] found id: ""
	I1210 05:55:53.739778   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.739785   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:53.739792   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:53.739802   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:53.795061   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:53.795080   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:53.806101   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:53.806117   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:53.872226   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:53.863177   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.863668   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865274   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865802   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.867416   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:53.863177   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.863668   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865274   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865802   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.867416   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:53.872238   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:53.872248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:53.933601   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:53.933619   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:56.466912   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:56.476796   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:56.476855   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:56.501021   57716 cri.go:89] found id: ""
	I1210 05:55:56.501035   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.501042   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:56.501048   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:56.501109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:56.524562   57716 cri.go:89] found id: ""
	I1210 05:55:56.524576   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.524583   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:56.524588   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:56.524644   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:56.547648   57716 cri.go:89] found id: ""
	I1210 05:55:56.547662   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.547669   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:56.547674   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:56.547730   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:56.576863   57716 cri.go:89] found id: ""
	I1210 05:55:56.576876   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.576883   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:56.576895   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:56.576956   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:56.600963   57716 cri.go:89] found id: ""
	I1210 05:55:56.600977   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.600984   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:56.600989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:56.601049   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:56.624726   57716 cri.go:89] found id: ""
	I1210 05:55:56.624739   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.624747   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:56.624755   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:56.624816   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:56.657236   57716 cri.go:89] found id: ""
	I1210 05:55:56.657249   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.657261   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:56.657270   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:56.657280   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:56.697559   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:56.697576   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:56.757986   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:56.758004   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:56.769563   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:56.769579   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:56.830223   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:56.822784   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.823676   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825199   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825498   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.826965   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:56.822784   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.823676   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825199   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825498   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.826965   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:56.830233   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:56.830243   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
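
Note that the describe-nodes step runs the versioned kubectl under /var/lib/minikube/binaries against the node-local kubeconfig (/var/lib/minikube/kubeconfig), so its failure is independent of any kubeconfig on the host. The same probe can be driven from the host through minikube ssh; <profile> below is a placeholder, since this excerpt does not show the profile name:

    # <profile> is hypothetical; substitute the failing cluster's name.
    minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
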
	I1210 05:55:59.393208   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:59.403384   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:59.403452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:59.428722   57716 cri.go:89] found id: ""
	I1210 05:55:59.428749   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.428757   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:59.428763   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:59.428833   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:59.453874   57716 cri.go:89] found id: ""
	I1210 05:55:59.453887   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.453895   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:59.453901   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:59.453962   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:59.478240   57716 cri.go:89] found id: ""
	I1210 05:55:59.478253   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.478260   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:59.478271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:59.478329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:59.502468   57716 cri.go:89] found id: ""
	I1210 05:55:59.502482   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.502489   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:59.502494   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:59.502554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:59.526784   57716 cri.go:89] found id: ""
	I1210 05:55:59.526797   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.526804   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:59.526809   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:59.526872   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:59.552473   57716 cri.go:89] found id: ""
	I1210 05:55:59.552486   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.552493   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:59.552499   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:59.552552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:59.576249   57716 cri.go:89] found id: ""
	I1210 05:55:59.576262   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.576269   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:59.576276   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:59.576288   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:59.631147   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:59.631169   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:59.642052   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:59.642067   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:59.721714   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:59.711733   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.712627   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714378   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714692   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.716946   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:59.711733   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.712627   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714378   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714692   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.716946   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:59.721733   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:59.721745   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:59.783216   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:59.783235   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:02.312967   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:02.323213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:02.323279   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:02.347978   57716 cri.go:89] found id: ""
	I1210 05:56:02.347992   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.348011   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:02.348017   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:02.348073   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:02.372899   57716 cri.go:89] found id: ""
	I1210 05:56:02.372912   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.372920   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:02.372926   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:02.372985   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:02.396971   57716 cri.go:89] found id: ""
	I1210 05:56:02.396985   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.396992   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:02.396997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:02.397057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:02.422416   57716 cri.go:89] found id: ""
	I1210 05:56:02.422430   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.422437   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:02.422443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:02.422501   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:02.447977   57716 cri.go:89] found id: ""
	I1210 05:56:02.447990   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.448004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:02.448009   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:02.448066   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:02.471774   57716 cri.go:89] found id: ""
	I1210 05:56:02.471788   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.471795   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:02.471800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:02.471857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:02.496057   57716 cri.go:89] found id: ""
	I1210 05:56:02.496072   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.496079   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:02.496088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:02.496098   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:02.523576   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:02.523592   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:02.579266   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:02.579296   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:02.590792   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:02.590809   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:02.657064   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:02.648570   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.649296   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651116   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651750   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.653344   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:02.648570   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.649296   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651116   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651750   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.653344   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:02.657075   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:02.657085   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:05.229868   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:05.239953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:05.240012   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:05.264605   57716 cri.go:89] found id: ""
	I1210 05:56:05.264618   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.264626   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:05.264631   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:05.264689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:05.288264   57716 cri.go:89] found id: ""
	I1210 05:56:05.288277   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.288285   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:05.288290   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:05.288354   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:05.313427   57716 cri.go:89] found id: ""
	I1210 05:56:05.313441   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.313448   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:05.313454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:05.313510   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:05.344659   57716 cri.go:89] found id: ""
	I1210 05:56:05.344673   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.344680   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:05.344686   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:05.344743   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:05.369600   57716 cri.go:89] found id: ""
	I1210 05:56:05.369614   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.369621   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:05.369626   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:05.369683   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:05.397066   57716 cri.go:89] found id: ""
	I1210 05:56:05.397080   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.397088   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:05.397093   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:05.397150   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:05.422728   57716 cri.go:89] found id: ""
	I1210 05:56:05.422744   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.422751   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:05.422759   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:05.422770   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:05.485204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:05.477114   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.477952   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479558   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479866   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.481321   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
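Each retry cycle enumerates the control-plane containers one name at a time. The probe can be reproduced by hand; this sketch simply loops over the names minikube queries, using the exact crictl flags from the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"
    done

Every lookup here returns an empty id list (found id: ""), i.e. the control-plane containers were never created, which is consistent with the apiserver port being closed.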
	I1210 05:56:05.485215   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:05.485227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:05.547693   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:05.547712   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:05.580471   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:05.580488   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:05.639350   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:05.639369   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:08.151149   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:08.162270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:08.162351   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:08.189435   57716 cri.go:89] found id: ""
	I1210 05:56:08.189448   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.189455   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:08.189465   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:08.189530   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:08.218992   57716 cri.go:89] found id: ""
	I1210 05:56:08.219006   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.219031   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:08.219042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:08.219100   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:08.245141   57716 cri.go:89] found id: ""
	I1210 05:56:08.245153   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.245160   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:08.245165   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:08.245221   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:08.273294   57716 cri.go:89] found id: ""
	I1210 05:56:08.273307   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.273314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:08.273319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:08.273382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:08.298396   57716 cri.go:89] found id: ""
	I1210 05:56:08.298410   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.298417   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:08.298422   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:08.298482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:08.322670   57716 cri.go:89] found id: ""
	I1210 05:56:08.322684   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.322691   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:08.322696   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:08.322753   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:08.347986   57716 cri.go:89] found id: ""
	I1210 05:56:08.348000   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.348007   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:08.348015   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:08.348024   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:08.411052   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:08.411070   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:08.438849   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:08.438865   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:08.496560   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:08.496587   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:08.507905   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:08.507921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:08.573377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:08.565623   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.566145   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.567826   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.568336   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.569867   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
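With no containers to inspect, minikube falls back to host-level logs. The commands below are copied verbatim from the cycle above; the kubelet and containerd journals are the place to look for why the static pods never started:

    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400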
	I1210 05:56:11.073585   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:11.083689   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:11.083757   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:11.108541   57716 cri.go:89] found id: ""
	I1210 05:56:11.108620   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.108628   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:11.108634   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:11.108694   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:11.134331   57716 cri.go:89] found id: ""
	I1210 05:56:11.134346   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.134353   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:11.134358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:11.134417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:11.158615   57716 cri.go:89] found id: ""
	I1210 05:56:11.158628   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.158635   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:11.158640   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:11.158698   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:11.183689   57716 cri.go:89] found id: ""
	I1210 05:56:11.183703   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.183710   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:11.183716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:11.183775   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:11.207798   57716 cri.go:89] found id: ""
	I1210 05:56:11.207812   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.207819   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:11.207825   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:11.207882   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:11.236712   57716 cri.go:89] found id: ""
	I1210 05:56:11.236726   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.236734   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:11.236739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:11.236801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:11.260759   57716 cri.go:89] found id: ""
	I1210 05:56:11.260773   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.260780   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:11.260788   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:11.260798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:11.289769   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:11.289786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:11.354319   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:11.354343   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:11.365879   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:11.365896   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:11.429322   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:11.420840   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.421615   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.423423   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.424052   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.425736   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:11.429334   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:11.429347   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:13.992257   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:14.005684   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:14.005747   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:14.031213   57716 cri.go:89] found id: ""
	I1210 05:56:14.031233   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.031241   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:14.031246   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:14.031308   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:14.055927   57716 cri.go:89] found id: ""
	I1210 05:56:14.055941   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.055948   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:14.055953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:14.056011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:14.080687   57716 cri.go:89] found id: ""
	I1210 05:56:14.080700   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.080707   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:14.080712   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:14.080770   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:14.108973   57716 cri.go:89] found id: ""
	I1210 05:56:14.108986   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.108993   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:14.108999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:14.109057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:14.138949   57716 cri.go:89] found id: ""
	I1210 05:56:14.138963   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.138971   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:14.138976   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:14.139058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:14.162184   57716 cri.go:89] found id: ""
	I1210 05:56:14.162199   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.162206   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:14.162211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:14.162267   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:14.186846   57716 cri.go:89] found id: ""
	I1210 05:56:14.186859   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.186866   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:14.186874   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:14.186885   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:14.214982   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:14.214998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:14.272262   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:14.272279   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:14.283290   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:14.283306   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:14.343519   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:14.335616   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.336321   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338030   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338568   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.340121   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
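The "describe nodes" step runs the kubectl binary minikube provisioned for this Kubernetes version against the in-node kubeconfig. Run by hand it fails with the same exit status 1 for as long as the apiserver is unreachable:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig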
	I1210 05:56:14.343530   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:14.343541   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:16.905886   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:16.915932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:16.915991   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:16.943689   57716 cri.go:89] found id: ""
	I1210 05:56:16.943703   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.943710   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:16.943715   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:16.943772   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:16.971692   57716 cri.go:89] found id: ""
	I1210 05:56:16.971705   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.971712   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:16.971717   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:16.971774   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:16.998705   57716 cri.go:89] found id: ""
	I1210 05:56:16.998721   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.998729   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:16.998734   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:16.998805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:17.028716   57716 cri.go:89] found id: ""
	I1210 05:56:17.028730   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.028737   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:17.028743   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:17.028810   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:17.056330   57716 cri.go:89] found id: ""
	I1210 05:56:17.056344   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.056351   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:17.056355   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:17.056412   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:17.084606   57716 cri.go:89] found id: ""
	I1210 05:56:17.084620   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.084627   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:17.084633   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:17.084690   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:17.108463   57716 cri.go:89] found id: ""
	I1210 05:56:17.108476   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.108484   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:17.108492   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:17.108502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:17.119206   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:17.119223   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:17.184513   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:17.176815   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.177383   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.178877   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.179482   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.181206   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:17.184523   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:17.184533   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:17.249050   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:17.249068   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:17.277433   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:17.277448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:19.835189   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:19.845211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:19.845270   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:19.869437   57716 cri.go:89] found id: ""
	I1210 05:56:19.869451   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.869457   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:19.869463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:19.869525   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:19.893666   57716 cri.go:89] found id: ""
	I1210 05:56:19.893680   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.893687   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:19.893691   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:19.893746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:19.925851   57716 cri.go:89] found id: ""
	I1210 05:56:19.925864   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.925871   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:19.925876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:19.925934   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:19.953268   57716 cri.go:89] found id: ""
	I1210 05:56:19.953283   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.953289   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:19.953295   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:19.953352   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:19.980541   57716 cri.go:89] found id: ""
	I1210 05:56:19.980555   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.980562   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:19.980567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:19.980629   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:20.014350   57716 cri.go:89] found id: ""
	I1210 05:56:20.014365   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.014383   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:20.014389   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:20.014463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:20.040904   57716 cri.go:89] found id: ""
	I1210 05:56:20.040918   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.040926   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:20.040933   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:20.040943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:20.097054   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:20.097072   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:20.108443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:20.108459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:20.173764   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:20.164932   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166475   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166965   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168506   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168930   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:20.173773   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:20.173784   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:20.235116   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:20.235134   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:22.763516   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:22.773433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:22.773490   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:22.797542   57716 cri.go:89] found id: ""
	I1210 05:56:22.797556   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.797562   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:22.797568   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:22.797622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:22.821893   57716 cri.go:89] found id: ""
	I1210 05:56:22.821907   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.821915   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:22.821920   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:22.821976   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:22.850542   57716 cri.go:89] found id: ""
	I1210 05:56:22.850557   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.850564   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:22.850569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:22.850627   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:22.875288   57716 cri.go:89] found id: ""
	I1210 05:56:22.875301   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.875314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:22.875320   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:22.875376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:22.900725   57716 cri.go:89] found id: ""
	I1210 05:56:22.900739   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.900747   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:22.900752   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:22.900808   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:22.931217   57716 cri.go:89] found id: ""
	I1210 05:56:22.931230   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.931237   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:22.931243   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:22.931309   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:22.963506   57716 cri.go:89] found id: ""
	I1210 05:56:22.963519   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.963525   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:22.963533   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:22.963542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:23.025625   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:23.025643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:23.036825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:23.036841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:23.100693   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:23.092404   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.093143   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.094913   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.095571   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.097307   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:23.100703   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:23.100715   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:23.160995   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:23.161014   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:25.690455   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:25.700306   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:25.700369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:25.725916   57716 cri.go:89] found id: ""
	I1210 05:56:25.725931   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.725942   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:25.725948   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:25.726009   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:25.749914   57716 cri.go:89] found id: ""
	I1210 05:56:25.749927   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.749935   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:25.749939   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:25.749998   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:25.776070   57716 cri.go:89] found id: ""
	I1210 05:56:25.776083   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.776090   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:25.776095   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:25.776154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:25.799518   57716 cri.go:89] found id: ""
	I1210 05:56:25.799532   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.799540   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:25.799546   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:25.799608   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:25.822990   57716 cri.go:89] found id: ""
	I1210 05:56:25.823057   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.823064   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:25.823072   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:25.823138   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:25.847416   57716 cri.go:89] found id: ""
	I1210 05:56:25.847430   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.847437   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:25.847442   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:25.847500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:25.871819   57716 cri.go:89] found id: ""
	I1210 05:56:25.871833   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.871840   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:25.871849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:25.871861   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:25.882590   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:25.882607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:25.975908   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:25.961777   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.962673   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967132   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967485   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.972482   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:25.975918   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:25.975929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:26.042569   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:26.042588   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:26.070803   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:26.070819   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
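The cycle timestamps (05:56:02, :05, :08, ... :28) show the apiserver probe repeating roughly every three seconds until the wait times out. A hypothetical one-liner to count the retries in a saved copy of this log (file name assumed):

    grep -c 'pgrep -xnf kube-apiserver' minikube.log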
	I1210 05:56:28.629575   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:28.639457   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:28.639513   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:28.663811   57716 cri.go:89] found id: ""
	I1210 05:56:28.663824   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.663832   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:28.663837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:28.663892   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:28.688455   57716 cri.go:89] found id: ""
	I1210 05:56:28.688469   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.688476   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:28.688481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:28.688538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:28.711872   57716 cri.go:89] found id: ""
	I1210 05:56:28.711886   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.711893   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:28.711898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:28.711955   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:28.736153   57716 cri.go:89] found id: ""
	I1210 05:56:28.736166   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.736173   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:28.736181   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:28.736242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:28.759991   57716 cri.go:89] found id: ""
	I1210 05:56:28.760011   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.760018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:28.760023   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:28.760080   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:28.784928   57716 cri.go:89] found id: ""
	I1210 05:56:28.784942   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.784949   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:28.784955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:28.785011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:28.808330   57716 cri.go:89] found id: ""
	I1210 05:56:28.808343   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.808350   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:28.808359   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:28.808368   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:28.864140   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:28.864158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:28.874997   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:28.875030   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:28.946271   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:28.938223   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.939058   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.940712   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.941043   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.942516   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:28.946281   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:28.946291   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:29.015729   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:29.015750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:31.546248   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:31.557000   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:31.557057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:31.581315   57716 cri.go:89] found id: ""
	I1210 05:56:31.581329   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.581336   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:31.581342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:31.581397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:31.606297   57716 cri.go:89] found id: ""
	I1210 05:56:31.606312   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.606327   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:31.606332   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:31.606389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:31.630600   57716 cri.go:89] found id: ""
	I1210 05:56:31.630614   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.630621   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:31.630627   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:31.630684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:31.658929   57716 cri.go:89] found id: ""
	I1210 05:56:31.658942   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.658949   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:31.658955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:31.659042   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:31.684421   57716 cri.go:89] found id: ""
	I1210 05:56:31.684434   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.684441   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:31.684456   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:31.684529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:31.708593   57716 cri.go:89] found id: ""
	I1210 05:56:31.708607   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.708614   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:31.708620   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:31.708678   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:31.733389   57716 cri.go:89] found id: ""
	I1210 05:56:31.733403   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.733411   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:31.733419   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:31.733429   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:31.762157   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:31.762171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:31.818205   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:31.818222   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:31.829166   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:31.829182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:31.894733   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:31.886837   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.887553   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889191   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889735   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.891344   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:31.894745   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:31.894756   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.466636   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:34.477387   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:34.477462   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:34.508975   57716 cri.go:89] found id: ""
	I1210 05:56:34.508989   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.508996   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:34.509002   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:34.509058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:34.536397   57716 cri.go:89] found id: ""
	I1210 05:56:34.536410   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.536417   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:34.536424   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:34.536482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:34.560872   57716 cri.go:89] found id: ""
	I1210 05:56:34.560885   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.560892   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:34.560898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:34.560959   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:34.585436   57716 cri.go:89] found id: ""
	I1210 05:56:34.585450   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.585457   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:34.585463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:34.585520   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:34.609983   57716 cri.go:89] found id: ""
	I1210 05:56:34.609997   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.610004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:34.610010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:34.610065   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:34.634652   57716 cri.go:89] found id: ""
	I1210 05:56:34.634666   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.634674   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:34.634679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:34.634737   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:34.660417   57716 cri.go:89] found id: ""
	I1210 05:56:34.660431   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.660438   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:34.660446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:34.660468   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:34.715849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:34.715870   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:34.726672   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:34.726687   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:34.788897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:34.781210   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.781759   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783378   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783973   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.785508   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:34.788907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:34.788917   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.850671   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:34.850690   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:37.378067   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:37.388018   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:37.388079   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:37.415590   57716 cri.go:89] found id: ""
	I1210 05:56:37.415604   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.415611   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:37.415617   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:37.415679   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:37.443166   57716 cri.go:89] found id: ""
	I1210 05:56:37.443179   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.443186   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:37.443192   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:37.443248   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:37.466187   57716 cri.go:89] found id: ""
	I1210 05:56:37.466201   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.466208   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:37.466214   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:37.466271   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:37.492297   57716 cri.go:89] found id: ""
	I1210 05:56:37.492321   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.492329   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:37.492335   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:37.492389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:37.515998   57716 cri.go:89] found id: ""
	I1210 05:56:37.516012   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.516018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:37.516024   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:37.516083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:37.540490   57716 cri.go:89] found id: ""
	I1210 05:56:37.540503   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.540510   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:37.540516   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:37.540576   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:37.565092   57716 cri.go:89] found id: ""
	I1210 05:56:37.565105   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.565111   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:37.565119   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:37.565137   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:37.625814   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:37.625837   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:37.637078   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:37.637104   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:37.697146   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:37.689936   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.690349   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691533   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691938   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.693652   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:37.697156   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:37.697182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:37.757019   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:37.757038   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:40.287595   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:40.298582   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:40.298641   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:40.322470   57716 cri.go:89] found id: ""
	I1210 05:56:40.322484   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.322491   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:40.322497   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:40.322552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:40.346764   57716 cri.go:89] found id: ""
	I1210 05:56:40.346778   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.346785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:40.346790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:40.346851   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:40.373286   57716 cri.go:89] found id: ""
	I1210 05:56:40.373300   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.373307   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:40.373313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:40.373372   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:40.402348   57716 cri.go:89] found id: ""
	I1210 05:56:40.402361   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.402368   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:40.402373   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:40.402428   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:40.427030   57716 cri.go:89] found id: ""
	I1210 05:56:40.427044   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.427052   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:40.427057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:40.427117   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:40.451451   57716 cri.go:89] found id: ""
	I1210 05:56:40.451478   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.451485   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:40.451491   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:40.451554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:40.480083   57716 cri.go:89] found id: ""
	I1210 05:56:40.480100   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.480106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:40.480114   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:40.480124   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:40.490894   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:40.490909   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:40.556681   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:40.549171   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.549844   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551479   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551814   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.553287   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:40.556692   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:40.556702   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:40.619424   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:40.619443   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:40.652592   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:40.652608   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:43.210686   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:43.221608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:43.221673   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:43.249950   57716 cri.go:89] found id: ""
	I1210 05:56:43.249964   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.249971   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:43.249977   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:43.250038   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:43.276671   57716 cri.go:89] found id: ""
	I1210 05:56:43.276685   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.276692   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:43.276697   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:43.276752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:43.301078   57716 cri.go:89] found id: ""
	I1210 05:56:43.301092   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.301099   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:43.301105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:43.301166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:43.325712   57716 cri.go:89] found id: ""
	I1210 05:56:43.325725   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.325732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:43.325753   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:43.325807   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:43.350013   57716 cri.go:89] found id: ""
	I1210 05:56:43.350027   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.350034   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:43.350039   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:43.350095   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:43.374239   57716 cri.go:89] found id: ""
	I1210 05:56:43.374253   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.374259   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:43.374265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:43.374325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:43.398684   57716 cri.go:89] found id: ""
	I1210 05:56:43.398697   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.398704   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:43.398713   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:43.398723   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:43.429674   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:43.429692   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:43.486606   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:43.486624   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:43.497851   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:43.497867   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:43.564988   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:43.556980   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.557595   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559286   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559906   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.561769   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:43.565001   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:43.565011   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:46.128659   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:46.139799   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:46.139857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:46.169381   57716 cri.go:89] found id: ""
	I1210 05:56:46.169395   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.169402   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:46.169408   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:46.169468   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:46.198882   57716 cri.go:89] found id: ""
	I1210 05:56:46.198896   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.198903   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:46.198909   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:46.198966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:46.234049   57716 cri.go:89] found id: ""
	I1210 05:56:46.234064   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.234072   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:46.234077   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:46.234134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:46.260031   57716 cri.go:89] found id: ""
	I1210 05:56:46.260044   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.260051   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:46.260057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:46.260112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:46.284339   57716 cri.go:89] found id: ""
	I1210 05:56:46.284353   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.284361   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:46.284366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:46.284425   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:46.309943   57716 cri.go:89] found id: ""
	I1210 05:56:46.309957   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.309964   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:46.309970   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:46.310026   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:46.335200   57716 cri.go:89] found id: ""
	I1210 05:56:46.335215   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.335222   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:46.335235   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:46.335247   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:46.391563   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:46.391580   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:46.403485   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:46.403501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:46.469778   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:46.461822   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.462325   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464066   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464772   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.466293   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:46.469787   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:46.469798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:46.533492   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:46.533510   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:49.061494   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:49.071430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:49.071494   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:49.094941   57716 cri.go:89] found id: ""
	I1210 05:56:49.094961   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.094969   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:49.094974   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:49.095053   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:49.119980   57716 cri.go:89] found id: ""
	I1210 05:56:49.119994   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.120001   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:49.120006   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:49.120061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:49.149253   57716 cri.go:89] found id: ""
	I1210 05:56:49.149267   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.149275   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:49.149280   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:49.149339   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:49.190394   57716 cri.go:89] found id: ""
	I1210 05:56:49.190407   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.190414   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:49.190419   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:49.190474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:49.226315   57716 cri.go:89] found id: ""
	I1210 05:56:49.226328   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.226335   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:49.226340   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:49.226398   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:49.253703   57716 cri.go:89] found id: ""
	I1210 05:56:49.253716   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.253723   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:49.253729   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:49.253793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:49.278595   57716 cri.go:89] found id: ""
	I1210 05:56:49.278609   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.278616   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:49.278633   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:49.278643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:49.339769   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:49.339786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:49.368179   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:49.368196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:49.424135   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:49.424152   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:49.435251   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:49.435277   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:49.499081   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:49.491345   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.492104   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.493573   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.494053   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.495641   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:52.000764   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:52.011936   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:52.011997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:52.044999   57716 cri.go:89] found id: ""
	I1210 05:56:52.045013   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.045020   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:52.045026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:52.045084   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:52.069248   57716 cri.go:89] found id: ""
	I1210 05:56:52.069262   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.069269   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:52.069274   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:52.069340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:52.098397   57716 cri.go:89] found id: ""
	I1210 05:56:52.098410   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.098428   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:52.098435   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:52.098500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:52.126868   57716 cri.go:89] found id: ""
	I1210 05:56:52.126887   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.126905   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:52.126910   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:52.126965   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:52.150645   57716 cri.go:89] found id: ""
	I1210 05:56:52.150658   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.150666   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:52.150681   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:52.150740   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:52.186283   57716 cri.go:89] found id: ""
	I1210 05:56:52.186296   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.186304   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:52.186318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:52.186374   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:52.218438   57716 cri.go:89] found id: ""
	I1210 05:56:52.218451   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.218458   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:52.218476   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:52.218486   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:52.281011   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:52.273152   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.273845   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.275592   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.276072   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.277623   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:52.273152   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.273845   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.275592   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.276072   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.277623   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:52.281021   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:52.281032   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:52.342042   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:52.342058   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:52.373121   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:52.373136   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:52.428970   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:52.428987   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
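Each retry cycle runs the same diagnostic sweep: enumerate control-plane containers via crictl, then gather kubelet, dmesg, describe-nodes, containerd, and container-status logs (only the gathering order varies between cycles). The crictl enumeration is equivalent to this loop, built from the exact commands in the log:

    # List containers in any state for each expected control-plane component.
    # Every query here returns nothing, hence the "0 containers" warnings above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
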
	I1210 05:56:54.940399   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:54.950167   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:54.950228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:54.974172   57716 cri.go:89] found id: ""
	I1210 05:56:54.974186   57716 logs.go:282] 0 containers: []
	W1210 05:56:54.974193   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:54.974199   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:54.974257   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:55.008246   57716 cri.go:89] found id: ""
	I1210 05:56:55.008262   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.008270   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:55.008275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:55.008340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:55.034655   57716 cri.go:89] found id: ""
	I1210 05:56:55.034669   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.034676   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:55.034682   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:55.034741   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:55.063972   57716 cri.go:89] found id: ""
	I1210 05:56:55.063986   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.063994   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:55.063999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:55.064057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:55.090263   57716 cri.go:89] found id: ""
	I1210 05:56:55.090275   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.090292   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:55.090298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:55.090353   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:55.113407   57716 cri.go:89] found id: ""
	I1210 05:56:55.113421   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.113428   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:55.113433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:55.113491   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:55.140991   57716 cri.go:89] found id: ""
	I1210 05:56:55.141010   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.141018   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:55.141025   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:55.141036   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:55.201731   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:55.201749   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:55.218256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:55.218270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:55.290800   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:55.282984   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.283573   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285214   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285730   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.287308   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:55.282984   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.283573   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285214   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285730   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.287308   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:55.290811   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:55.290831   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:55.355200   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:55.355218   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:57.881741   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:57.891584   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:57.891646   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:57.918310   57716 cri.go:89] found id: ""
	I1210 05:56:57.918323   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.918330   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:57.918336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:57.918391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:57.942318   57716 cri.go:89] found id: ""
	I1210 05:56:57.942331   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.942338   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:57.942344   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:57.942402   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:57.966253   57716 cri.go:89] found id: ""
	I1210 05:56:57.966267   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.966274   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:57.966279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:57.966338   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:57.990324   57716 cri.go:89] found id: ""
	I1210 05:56:57.990338   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.990346   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:57.990351   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:57.990414   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:58.021444   57716 cri.go:89] found id: ""
	I1210 05:56:58.021458   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.021466   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:58.021471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:58.021529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:58.046661   57716 cri.go:89] found id: ""
	I1210 05:56:58.046680   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.046688   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:58.046699   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:58.046767   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:58.071123   57716 cri.go:89] found id: ""
	I1210 05:56:58.071137   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.071145   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:58.071153   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:58.071162   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:58.135978   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:58.135998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:58.167638   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:58.167656   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:58.232589   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:58.232610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:58.244347   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:58.244363   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:58.304989   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:58.297197   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.297898   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.299609   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.300132   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.301733   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:58.297197   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.297898   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.299609   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.300132   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.301733   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
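The describe-nodes step fails the same way in every cycle: minikube invokes the kubectl binary staged at /var/lib/minikube/binaries/v1.35.0-rc.1/ against the in-node kubeconfig, and the command exits 1 because that kubeconfig's server (localhost:8441, per the errors) is unreachable. The exact command, runnable inside the node via minikube ssh:

    # Verbatim from the log; the version segment matches the staged binary.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit: $?"   # prints 1 while the apiserver is down
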
	I1210 05:57:00.806679   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:00.816733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:00.816793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:00.845594   57716 cri.go:89] found id: ""
	I1210 05:57:00.845608   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.845615   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:00.845622   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:00.845682   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:00.880377   57716 cri.go:89] found id: ""
	I1210 05:57:00.880391   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.880399   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:00.880405   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:00.880463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:00.904970   57716 cri.go:89] found id: ""
	I1210 05:57:00.904990   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.904997   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:00.905003   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:00.905063   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:00.933169   57716 cri.go:89] found id: ""
	I1210 05:57:00.933183   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.933191   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:00.933196   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:00.933255   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:00.962218   57716 cri.go:89] found id: ""
	I1210 05:57:00.962231   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.962238   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:00.962244   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:00.962301   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:00.987794   57716 cri.go:89] found id: ""
	I1210 05:57:00.987807   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.987814   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:00.987820   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:00.987879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:01.014287   57716 cri.go:89] found id: ""
	I1210 05:57:01.014302   57716 logs.go:282] 0 containers: []
	W1210 05:57:01.014309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:01.014318   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:01.014328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:01.045925   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:01.045941   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:01.102696   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:01.102714   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:01.114077   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:01.114092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:01.201703   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:01.177406   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.182687   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.186518   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.195186   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.196003   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:01.177406   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.182687   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.186518   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.195186   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.196003   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:01.201726   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:01.201738   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:03.774227   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:03.784265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:03.784325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:03.809259   57716 cri.go:89] found id: ""
	I1210 05:57:03.809273   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.809280   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:03.809285   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:03.809347   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:03.835314   57716 cri.go:89] found id: ""
	I1210 05:57:03.835329   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.835336   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:03.835342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:03.835401   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:03.860149   57716 cri.go:89] found id: ""
	I1210 05:57:03.860163   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.860170   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:03.860175   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:03.860243   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:03.886583   57716 cri.go:89] found id: ""
	I1210 05:57:03.886597   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.886604   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:03.886610   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:03.886669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:03.915441   57716 cri.go:89] found id: ""
	I1210 05:57:03.915454   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.915462   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:03.915467   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:03.915528   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:03.939994   57716 cri.go:89] found id: ""
	I1210 05:57:03.940008   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.940015   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:03.940021   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:03.944397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:03.970729   57716 cri.go:89] found id: ""
	I1210 05:57:03.970742   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.970749   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:03.970757   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:03.970768   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:04.027596   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:04.027617   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:04.039557   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:04.039578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:04.105314   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:04.097441   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.098313   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.099991   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.100340   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.101876   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:04.097441   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.098313   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.099991   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.100340   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.101876   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:04.105325   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:04.105336   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:04.167908   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:04.167927   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
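The "container status" step uses a shell fallback: prefer crictl when installed, otherwise try docker; because of the || chaining, docker is also attempted if crictl itself fails. Written out as a near-equivalent (slightly stricter) sketch:

    # Near-equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a   # fallback when crictl is absent
    fi
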
	I1210 05:57:06.703048   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:06.712953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:06.713014   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:06.740745   57716 cri.go:89] found id: ""
	I1210 05:57:06.740759   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.740766   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:06.740771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:06.740826   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:06.764572   57716 cri.go:89] found id: ""
	I1210 05:57:06.764585   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.764592   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:06.764598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:06.764654   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:06.792403   57716 cri.go:89] found id: ""
	I1210 05:57:06.792418   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.792425   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:06.792430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:06.792488   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:06.816569   57716 cri.go:89] found id: ""
	I1210 05:57:06.816583   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.816591   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:06.816596   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:06.816659   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:06.841104   57716 cri.go:89] found id: ""
	I1210 05:57:06.841118   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.841125   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:06.841131   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:06.841191   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:06.863923   57716 cri.go:89] found id: ""
	I1210 05:57:06.863936   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.863943   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:06.863949   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:06.864004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:06.889078   57716 cri.go:89] found id: ""
	I1210 05:57:06.889091   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.889099   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:06.889106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:06.889116   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:06.943842   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:06.943863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:06.954461   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:06.954477   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:07.025823   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:07.017208   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.017859   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.019473   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.020044   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.021782   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:07.017208   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.017859   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.019473   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.020044   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.021782   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:07.025833   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:07.025847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:07.087136   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:07.087156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.618129   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:09.627876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:09.627939   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:09.655385   57716 cri.go:89] found id: ""
	I1210 05:57:09.655399   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.655406   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:09.655411   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:09.655476   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:09.678439   57716 cri.go:89] found id: ""
	I1210 05:57:09.678453   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.678460   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:09.678466   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:09.678521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:09.708049   57716 cri.go:89] found id: ""
	I1210 05:57:09.708063   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.708071   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:09.708076   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:09.708134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:09.731272   57716 cri.go:89] found id: ""
	I1210 05:57:09.731286   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.731293   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:09.731298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:09.731355   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:09.756542   57716 cri.go:89] found id: ""
	I1210 05:57:09.756556   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.756563   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:09.756569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:09.756625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:09.782376   57716 cri.go:89] found id: ""
	I1210 05:57:09.782389   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.782396   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:09.782402   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:09.782469   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:09.806766   57716 cri.go:89] found id: ""
	I1210 05:57:09.806780   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.806787   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:09.806795   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:09.806806   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:09.817591   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:09.817607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:09.877883   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:09.869907   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.870472   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872036   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872545   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.874081   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:09.869907   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.870472   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872036   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872545   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.874081   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:09.877897   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:09.877907   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:09.939799   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:09.939817   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.972539   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:09.972555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.528080   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:12.538052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:12.538112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:12.561407   57716 cri.go:89] found id: ""
	I1210 05:57:12.561421   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.561429   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:12.561434   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:12.561504   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:12.587323   57716 cri.go:89] found id: ""
	I1210 05:57:12.587337   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.587344   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:12.587349   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:12.587407   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:12.611528   57716 cri.go:89] found id: ""
	I1210 05:57:12.611542   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.611550   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:12.611555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:12.611613   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:12.639252   57716 cri.go:89] found id: ""
	I1210 05:57:12.639266   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.639273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:12.639278   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:12.639340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:12.662845   57716 cri.go:89] found id: ""
	I1210 05:57:12.662858   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.662865   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:12.662871   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:12.662924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:12.687312   57716 cri.go:89] found id: ""
	I1210 05:57:12.687325   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.687332   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:12.687338   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:12.687410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:12.712443   57716 cri.go:89] found id: ""
	I1210 05:57:12.712456   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.712463   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:12.712471   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:12.712484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:12.772312   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:12.772330   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:12.800589   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:12.800611   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.856815   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:12.856832   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:12.868411   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:12.868427   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:12.938613   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:12.928160   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.928723   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.932890   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.933536   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.935292   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:12.928160   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.928723   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.932890   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.933536   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.935292   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
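The timestamps show the poll cadence: the pgrep check fires at 05:56:52.0, :54.9, :57.9, 05:57:00.8, :03.8, :06.7, :09.6, :12.5, :15.4, i.e. roughly every three seconds, and each miss triggers the same log sweep. The gathering commands themselves, verbatim from the cycles above:

    # Last 400 lines of each relevant unit, plus kernel warnings and above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
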
	I1210 05:57:15.439137   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:15.449933   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:15.450005   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:15.483755   57716 cri.go:89] found id: ""
	I1210 05:57:15.483769   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.483775   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:15.483781   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:15.483837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:15.507520   57716 cri.go:89] found id: ""
	I1210 05:57:15.507534   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.507542   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:15.507547   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:15.507605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:15.534553   57716 cri.go:89] found id: ""
	I1210 05:57:15.534566   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.534573   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:15.534578   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:15.534635   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:15.559360   57716 cri.go:89] found id: ""
	I1210 05:57:15.559374   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.559381   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:15.559386   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:15.559443   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:15.584591   57716 cri.go:89] found id: ""
	I1210 05:57:15.584607   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.584614   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:15.584619   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:15.584677   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:15.613451   57716 cri.go:89] found id: ""
	I1210 05:57:15.613471   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.613479   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:15.613485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:15.613607   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:15.638843   57716 cri.go:89] found id: ""
	I1210 05:57:15.638858   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.638865   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:15.638874   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:15.638884   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:15.694185   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:15.694203   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:15.704709   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:15.704725   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:15.769534   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:15.761459   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.762286   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.763956   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.764609   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.766176   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:15.761459   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.762286   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.763956   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.764609   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.766176   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:15.769543   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:15.769556   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:15.830240   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:15.830258   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
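The cycle above is minikube's control-plane probe: for each expected component it asks the CRI runtime for a matching container and comes up empty. A minimal shell sketch of the same sequence, using only the crictl invocation visible in the log (the component list mirrors the names queried above):

    # Probe each expected control-plane container; empty output from
    # `crictl ps -a --quiet --name=<name>` means no container (running
    # or exited) matches that name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done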
	I1210 05:57:18.356935   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:18.366837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:18.366896   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:18.391280   57716 cri.go:89] found id: ""
	I1210 05:57:18.391294   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.391301   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:18.391308   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:18.391376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:18.421532   57716 cri.go:89] found id: ""
	I1210 05:57:18.421546   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.421553   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:18.421558   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:18.421625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:18.455057   57716 cri.go:89] found id: ""
	I1210 05:57:18.455071   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.455078   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:18.455083   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:18.455153   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:18.488121   57716 cri.go:89] found id: ""
	I1210 05:57:18.488135   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.488142   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:18.488148   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:18.488210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:18.511864   57716 cri.go:89] found id: ""
	I1210 05:57:18.511878   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.511886   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:18.511905   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:18.511966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:18.535922   57716 cri.go:89] found id: ""
	I1210 05:57:18.535936   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.535957   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:18.535963   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:18.536029   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:18.560287   57716 cri.go:89] found id: ""
	I1210 05:57:18.560302   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.560309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:18.560317   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:18.560328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:18.627753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:18.619860   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.620508   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622357   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622888   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.624346   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:18.619860   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.620508   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622357   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622888   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.624346   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:18.627764   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:18.627776   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:18.688471   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:18.688489   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:18.719143   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:18.719159   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:18.774435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:18.774453   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
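Every describe-nodes attempt fails identically because nothing is listening on the apiserver port 8441. Assuming curl is available on the node (an assumption; the test itself never runs it), the same URL the kubectl errors report can be probed directly:

    # "Connection refused" here means no process is bound to port 8441,
    # matching the "dial tcp [::1]:8441: connect: connection refused"
    # errors in the log above.
    curl -sk --max-time 5 "https://localhost:8441/api?timeout=32s" \
      || echo "apiserver port 8441 unreachable"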
	I1210 05:57:21.285722   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:21.295523   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:21.295582   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:21.322675   57716 cri.go:89] found id: ""
	I1210 05:57:21.322688   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.322696   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:21.322701   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:21.322758   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:21.347136   57716 cri.go:89] found id: ""
	I1210 05:57:21.347150   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.347157   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:21.347162   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:21.347219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:21.372204   57716 cri.go:89] found id: ""
	I1210 05:57:21.372217   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.372224   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:21.372229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:21.372283   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:21.395417   57716 cri.go:89] found id: ""
	I1210 05:57:21.395431   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.395438   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:21.395443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:21.395515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:21.440154   57716 cri.go:89] found id: ""
	I1210 05:57:21.440167   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.440174   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:21.440179   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:21.440240   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:21.473140   57716 cri.go:89] found id: ""
	I1210 05:57:21.473154   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.473166   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:21.473172   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:21.473227   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:21.501607   57716 cri.go:89] found id: ""
	I1210 05:57:21.501630   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.501638   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:21.501646   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:21.501657   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:21.534381   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:21.534397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:21.591435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:21.591454   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:21.602570   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:21.602586   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:21.665543   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:21.656612   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.657173   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659237   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659598   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.661250   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:21.656612   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.657173   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659237   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659598   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.661250   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:21.665553   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:21.665564   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.232360   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:24.242545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:24.242605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:24.268962   57716 cri.go:89] found id: ""
	I1210 05:57:24.268976   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.268983   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:24.268989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:24.269051   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:24.293625   57716 cri.go:89] found id: ""
	I1210 05:57:24.293638   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.293645   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:24.293650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:24.293706   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:24.323101   57716 cri.go:89] found id: ""
	I1210 05:57:24.323115   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.323122   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:24.323127   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:24.323184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:24.352417   57716 cri.go:89] found id: ""
	I1210 05:57:24.352431   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.352442   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:24.352448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:24.352506   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:24.377825   57716 cri.go:89] found id: ""
	I1210 05:57:24.377839   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.377846   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:24.377851   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:24.377907   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:24.401476   57716 cri.go:89] found id: ""
	I1210 05:57:24.401490   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.401497   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:24.401502   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:24.401560   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:24.430784   57716 cri.go:89] found id: ""
	I1210 05:57:24.430798   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.430805   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:24.430813   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:24.430826   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:24.496086   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:24.496105   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:24.508163   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:24.508178   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:24.572343   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:24.563972   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.564594   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.566433   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.567204   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.568973   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:24.563972   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.564594   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.566433   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.567204   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.568973   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:24.572354   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:24.572365   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.634266   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:24.634284   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
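The container-status command above is a small fallback chain: resolve crictl via which, fall back to the bare name if the lookup fails, and fall back to docker if crictl itself errors out. Expanded for readability, with the same shell semantics:

    # `which crictl` prints the binary's path when crictl is on PATH;
    # otherwise the substitution yields the literal string "crictl".
    # If that command fails too, try the docker CLI instead.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a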
	I1210 05:57:27.162032   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:27.171692   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:27.171751   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:27.195293   57716 cri.go:89] found id: ""
	I1210 05:57:27.195306   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.195313   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:27.195319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:27.195375   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:27.223719   57716 cri.go:89] found id: ""
	I1210 05:57:27.223733   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.223741   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:27.223746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:27.223805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:27.249635   57716 cri.go:89] found id: ""
	I1210 05:57:27.249648   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.249655   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:27.249661   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:27.249718   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:27.274420   57716 cri.go:89] found id: ""
	I1210 05:57:27.274434   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.274443   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:27.274448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:27.274515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:27.302747   57716 cri.go:89] found id: ""
	I1210 05:57:27.302760   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.302777   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:27.302782   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:27.302842   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:27.327624   57716 cri.go:89] found id: ""
	I1210 05:57:27.327638   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.327645   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:27.327650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:27.327710   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:27.351138   57716 cri.go:89] found id: ""
	I1210 05:57:27.351152   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.351159   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:27.351168   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:27.351179   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:27.416428   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:27.416448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:27.458729   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:27.458746   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:27.517941   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:27.517959   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:27.528443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:27.528459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:27.592381   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:27.584705   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.585249   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.586673   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.587168   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.588572   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:27.584705   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.585249   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.586673   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.587168   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.588572   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
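The describe-nodes failure can be reproduced by hand with the exact command from the log, run against the node's embedded kubeconfig; with the apiserver down it exits with status 1 and the connection-refused errors shown above:

    # Run the bundled kubectl directly against the node's kubeconfig.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit status: $?"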
	I1210 05:57:30.094042   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:30.104609   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:30.104685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:30.131255   57716 cri.go:89] found id: ""
	I1210 05:57:30.131270   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.131277   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:30.131283   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:30.131348   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:30.160477   57716 cri.go:89] found id: ""
	I1210 05:57:30.160491   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.160498   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:30.160503   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:30.160562   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:30.186824   57716 cri.go:89] found id: ""
	I1210 05:57:30.186837   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.186845   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:30.186850   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:30.186910   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:30.212870   57716 cri.go:89] found id: ""
	I1210 05:57:30.212885   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.212892   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:30.212899   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:30.212957   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:30.238085   57716 cri.go:89] found id: ""
	I1210 05:57:30.238098   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.238105   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:30.238111   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:30.238169   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:30.264614   57716 cri.go:89] found id: ""
	I1210 05:57:30.264628   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.264635   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:30.264641   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:30.264697   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:30.292801   57716 cri.go:89] found id: ""
	I1210 05:57:30.292816   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.292823   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:30.292831   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:30.292841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:30.324527   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:30.324543   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:30.382130   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:30.382156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:30.392903   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:30.392921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:30.479224   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:30.470442   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.471725   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.473752   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.474178   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.475815   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:30.470442   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.471725   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.473752   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.474178   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.475815   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:30.479235   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:30.479257   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.043979   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:33.054086   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:33.054144   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:33.079719   57716 cri.go:89] found id: ""
	I1210 05:57:33.079733   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.079740   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:33.079746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:33.079804   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:33.109000   57716 cri.go:89] found id: ""
	I1210 05:57:33.109013   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.109020   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:33.109026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:33.109083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:33.134184   57716 cri.go:89] found id: ""
	I1210 05:57:33.134198   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.134206   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:33.134213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:33.134275   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:33.158142   57716 cri.go:89] found id: ""
	I1210 05:57:33.158155   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.158162   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:33.158168   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:33.158253   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:33.181293   57716 cri.go:89] found id: ""
	I1210 05:57:33.181306   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.181313   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:33.181319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:33.181376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:33.206025   57716 cri.go:89] found id: ""
	I1210 05:57:33.206040   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.206047   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:33.206052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:33.206149   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:33.230253   57716 cri.go:89] found id: ""
	I1210 05:57:33.230267   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.230275   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:33.230283   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:33.230293   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.292011   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:33.292028   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:33.318004   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:33.318019   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:33.377256   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:33.377273   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:33.387928   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:33.387943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:33.461753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:33.453954   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.454800   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456253   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456768   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.458350   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:33.453954   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.454800   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456253   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456768   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.458350   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:35.962013   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:35.972548   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:35.972622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:36.000855   57716 cri.go:89] found id: ""
	I1210 05:57:36.000870   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.000880   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:36.000900   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:36.000977   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:36.029136   57716 cri.go:89] found id: ""
	I1210 05:57:36.029151   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.029158   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:36.029164   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:36.029228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:36.054512   57716 cri.go:89] found id: ""
	I1210 05:57:36.054525   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.054533   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:36.054538   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:36.054597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:36.080508   57716 cri.go:89] found id: ""
	I1210 05:57:36.080522   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.080529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:36.080535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:36.080594   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:36.108590   57716 cri.go:89] found id: ""
	I1210 05:57:36.108604   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.108611   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:36.108616   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:36.108684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:36.137690   57716 cri.go:89] found id: ""
	I1210 05:57:36.137704   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.137711   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:36.137716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:36.137777   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:36.164307   57716 cri.go:89] found id: ""
	I1210 05:57:36.164321   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.164328   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:36.164335   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:36.164345   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:36.219816   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:36.219833   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:36.231171   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:36.231187   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:36.294059   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:36.285785   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.286547   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288109   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288462   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.290084   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:36.285785   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.286547   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288109   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288462   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.290084   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:36.294068   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:36.294078   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:36.358593   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:36.358612   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
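Each retry cycle opens with a pgrep probe for a live apiserver process, as in the line below. The flags are standard procps pgrep behaviour: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match. A sketch of the same check:

    # Exits non-zero (printing nothing) when no kube-apiserver process
    # with "minikube" in its command line exists, the failure mode here.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "no apiserver process found"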
	I1210 05:57:38.888296   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:38.898447   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:38.898505   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:38.925123   57716 cri.go:89] found id: ""
	I1210 05:57:38.925137   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.925144   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:38.925150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:38.925210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:38.949713   57716 cri.go:89] found id: ""
	I1210 05:57:38.949727   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.949734   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:38.949739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:38.949797   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:38.974867   57716 cri.go:89] found id: ""
	I1210 05:57:38.974881   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.974888   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:38.974893   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:38.974949   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:39.008214   57716 cri.go:89] found id: ""
	I1210 05:57:39.008228   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.008235   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:39.008240   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:39.008300   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:39.033316   57716 cri.go:89] found id: ""
	I1210 05:57:39.033330   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.033342   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:39.033347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:39.033405   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:39.057634   57716 cri.go:89] found id: ""
	I1210 05:57:39.057648   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.057655   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:39.057660   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:39.057719   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:39.082101   57716 cri.go:89] found id: ""
	I1210 05:57:39.082115   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.082125   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:39.082133   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:39.082143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:39.144897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:39.137033   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.137582   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139164   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139565   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.141172   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:39.137033   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.137582   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139164   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139565   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.141172   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:39.144907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:39.144920   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:39.209520   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:39.209538   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:39.239106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:39.239121   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:39.294711   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:39.294728   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
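(Editorial note: the block above is one full iteration of minikube's apiserver wait loop: pgrep for the apiserver process, then a `crictl ps -a --quiet --name=<component>` probe per control-plane component, then log gathering. Below is a minimal, illustrative Go sketch of that detection step run locally rather than over minikube's ssh_runner; the names and structure are assumptions for illustration, not minikube's actual code.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component list the log probes, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(strings.TrimSpace(string(out)))
		if len(ids) == 0 {
			// Corresponds to the repeated `found id: ""` / `0 containers` lines.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}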
	I1210 05:57:41.805411   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:41.814952   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:41.815027   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:41.838919   57716 cri.go:89] found id: ""
	I1210 05:57:41.838933   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.838940   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:41.838946   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:41.839004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:41.865368   57716 cri.go:89] found id: ""
	I1210 05:57:41.865382   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.865389   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:41.865394   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:41.865452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:41.889411   57716 cri.go:89] found id: ""
	I1210 05:57:41.889424   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.889431   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:41.889436   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:41.889521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:41.915079   57716 cri.go:89] found id: ""
	I1210 05:57:41.915093   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.915101   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:41.915110   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:41.915173   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:41.940274   57716 cri.go:89] found id: ""
	I1210 05:57:41.940288   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.940295   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:41.940301   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:41.940360   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:41.969301   57716 cri.go:89] found id: ""
	I1210 05:57:41.969314   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.969321   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:41.969329   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:41.969387   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:41.993086   57716 cri.go:89] found id: ""
	I1210 05:57:41.993100   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.993108   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:41.993116   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:41.993127   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:42.006335   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:42.006357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:42.077276   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:42.067659   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.069125   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.070001   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071203   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071880   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:42.067659   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.069125   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.070001   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071203   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071880   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:42.077290   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:42.077302   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:42.143212   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:42.143248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:42.179140   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:42.179158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:44.752413   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:44.762150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:44.762207   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:44.791897   57716 cri.go:89] found id: ""
	I1210 05:57:44.791911   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.791918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:44.791924   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:44.791983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:44.815813   57716 cri.go:89] found id: ""
	I1210 05:57:44.815827   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.815834   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:44.815839   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:44.815894   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:44.839318   57716 cri.go:89] found id: ""
	I1210 05:57:44.839331   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.839337   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:44.839342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:44.839399   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:44.866822   57716 cri.go:89] found id: ""
	I1210 05:57:44.866835   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.866842   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:44.866848   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:44.866904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:44.892455   57716 cri.go:89] found id: ""
	I1210 05:57:44.892469   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.892476   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:44.892481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:44.892536   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:44.920574   57716 cri.go:89] found id: ""
	I1210 05:57:44.920588   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.920596   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:44.920602   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:44.920663   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:44.947951   57716 cri.go:89] found id: ""
	I1210 05:57:44.947965   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.947971   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:44.947979   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:44.947988   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:45.005480   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:45.005501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:45.022560   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:45.022578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:45.142523   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:45.129527   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.130054   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.132621   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.134289   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.135580   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:45.129527   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.130054   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.132621   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.134289   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.135580   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:45.142534   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:45.142550   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:45.216088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:45.216135   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:47.759715   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:47.769555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:47.769615   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:47.793943   57716 cri.go:89] found id: ""
	I1210 05:57:47.793957   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.793964   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:47.793969   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:47.794039   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:47.818334   57716 cri.go:89] found id: ""
	I1210 05:57:47.818348   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.818355   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:47.818360   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:47.818417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:47.842582   57716 cri.go:89] found id: ""
	I1210 05:57:47.842599   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.842617   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:47.842623   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:47.842689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:47.868471   57716 cri.go:89] found id: ""
	I1210 05:57:47.868485   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.868492   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:47.868498   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:47.868559   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:47.897381   57716 cri.go:89] found id: ""
	I1210 05:57:47.897394   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.897401   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:47.897416   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:47.897473   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:47.920386   57716 cri.go:89] found id: ""
	I1210 05:57:47.920400   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.920407   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:47.920412   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:47.920474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:47.947866   57716 cri.go:89] found id: ""
	I1210 05:57:47.947879   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.947886   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:47.947894   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:47.947904   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:48.008844   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:48.008863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:48.038885   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:48.038903   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:48.095592   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:48.095610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:48.107140   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:48.107155   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:48.171340   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:48.162734   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.163476   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165210   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165663   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.167242   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:48.162734   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.163476   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165210   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165663   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.167242   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
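(Editorial note: every `describe nodes` attempt above fails the same way: kubectl cannot reach https://localhost:8441 and gets "connection refused", meaning nothing is listening on the apiserver port at all. A bare TCP dial is enough to confirm that state, as in this standalone sketch; the port is taken from the log and the program is illustrative, not part of minikube.)

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "localhost:8441" // apiserver endpoint from the kubectl errors above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// The state the log shows: dial tcp [::1]:8441: connect: connection refused.
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening at %s; the apiserver may be coming up\n", addr)
}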
	I1210 05:57:50.672091   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:50.683391   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:50.683451   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:50.711296   57716 cri.go:89] found id: ""
	I1210 05:57:50.711311   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.711319   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:50.711327   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:50.711382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:50.740763   57716 cri.go:89] found id: ""
	I1210 05:57:50.740777   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.740785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:50.740790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:50.740853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:50.772079   57716 cri.go:89] found id: ""
	I1210 05:57:50.772093   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.772111   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:50.772117   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:50.772184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:50.800962   57716 cri.go:89] found id: ""
	I1210 05:57:50.800975   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.800982   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:50.800988   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:50.801044   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:50.825974   57716 cri.go:89] found id: ""
	I1210 05:57:50.825993   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.826000   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:50.826005   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:50.826061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:50.854343   57716 cri.go:89] found id: ""
	I1210 05:57:50.854356   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.854364   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:50.854369   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:50.854426   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:50.878560   57716 cri.go:89] found id: ""
	I1210 05:57:50.878573   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.878581   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:50.878599   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:50.878609   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:50.906006   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:50.906022   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:50.961851   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:50.961869   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:50.973152   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:50.973171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:51.044678   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:51.036912   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.037431   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039082   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039573   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.041151   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:51.036912   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.037431   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039082   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039573   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.041151   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:51.044689   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:51.044699   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:53.606481   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:53.616567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:53.616625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:53.641012   57716 cri.go:89] found id: ""
	I1210 05:57:53.641025   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.641031   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:53.641037   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:53.641092   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:53.673275   57716 cri.go:89] found id: ""
	I1210 05:57:53.673290   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.673307   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:53.673313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:53.673369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:53.709276   57716 cri.go:89] found id: ""
	I1210 05:57:53.709291   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.709298   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:53.709302   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:53.709369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:53.739332   57716 cri.go:89] found id: ""
	I1210 05:57:53.739346   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.739353   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:53.739358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:53.739415   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:53.764637   57716 cri.go:89] found id: ""
	I1210 05:57:53.764650   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.764657   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:53.764662   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:53.764717   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:53.793424   57716 cri.go:89] found id: ""
	I1210 05:57:53.793438   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.793446   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:53.793451   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:53.793514   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:53.823828   57716 cri.go:89] found id: ""
	I1210 05:57:53.823842   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.823849   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:53.823857   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:53.823868   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:53.834565   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:53.834583   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:53.898035   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:53.890056   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.890844   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.892495   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.893000   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.894620   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:53.890056   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.890844   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.892495   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.893000   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.894620   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:53.898052   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:53.898063   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:53.960027   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:53.960044   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:53.988584   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:53.988600   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.551892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:56.562044   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:56.562109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:56.587872   57716 cri.go:89] found id: ""
	I1210 05:57:56.587889   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.587897   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:56.587902   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:56.587967   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:56.613907   57716 cri.go:89] found id: ""
	I1210 05:57:56.613920   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.613927   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:56.613932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:56.613988   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:56.638685   57716 cri.go:89] found id: ""
	I1210 05:57:56.638699   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.638706   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:56.638711   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:56.638768   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:56.665211   57716 cri.go:89] found id: ""
	I1210 05:57:56.665225   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.665232   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:56.665237   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:56.665295   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:56.696149   57716 cri.go:89] found id: ""
	I1210 05:57:56.696163   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.696169   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:56.696174   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:56.696231   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:56.728016   57716 cri.go:89] found id: ""
	I1210 05:57:56.728029   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.728036   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:56.728042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:56.728104   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:56.752871   57716 cri.go:89] found id: ""
	I1210 05:57:56.752886   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.752894   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:56.752901   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:56.752913   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:56.783267   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:56.783283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.842023   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:56.842046   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:56.853533   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:56.853549   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:56.914976   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:56.907206   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.907987   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909541   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909854   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.911455   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:56.907206   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.907987   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909541   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909854   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.911455   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:56.914988   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:56.915000   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:59.477082   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:59.487185   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:59.487242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:59.511535   57716 cri.go:89] found id: ""
	I1210 05:57:59.511549   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.511556   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:59.511562   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:59.511639   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:59.536235   57716 cri.go:89] found id: ""
	I1210 05:57:59.536249   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.536265   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:59.536271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:59.536329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:59.560801   57716 cri.go:89] found id: ""
	I1210 05:57:59.560815   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.560821   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:59.560827   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:59.560890   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:59.586232   57716 cri.go:89] found id: ""
	I1210 05:57:59.586247   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.586273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:59.586279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:59.586343   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:59.610087   57716 cri.go:89] found id: ""
	I1210 05:57:59.610101   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.610108   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:59.610113   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:59.610170   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:59.634249   57716 cri.go:89] found id: ""
	I1210 05:57:59.634263   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.634270   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:59.634275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:59.634333   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:59.659066   57716 cri.go:89] found id: ""
	I1210 05:57:59.659100   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.659106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:59.659115   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:59.659125   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:59.670606   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:59.670622   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:59.744825   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:59.737528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.737938   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739905   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.741403   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:59.737528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.737938   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739905   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.741403   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:59.744835   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:59.744847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:59.806075   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:59.806092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:59.841753   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:59.841769   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.400095   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:02.410925   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:02.410999   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:02.435337   57716 cri.go:89] found id: ""
	I1210 05:58:02.435351   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.435358   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:02.435363   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:02.435421   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:02.459273   57716 cri.go:89] found id: ""
	I1210 05:58:02.459287   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.459294   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:02.459299   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:02.459369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:02.484838   57716 cri.go:89] found id: ""
	I1210 05:58:02.484859   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.484867   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:02.484872   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:02.484930   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:02.513703   57716 cri.go:89] found id: ""
	I1210 05:58:02.513718   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.513732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:02.513738   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:02.513799   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:02.537442   57716 cri.go:89] found id: ""
	I1210 05:58:02.537456   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.537472   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:02.537478   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:02.537538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:02.562811   57716 cri.go:89] found id: ""
	I1210 05:58:02.562824   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.562831   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:02.562837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:02.562904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:02.593233   57716 cri.go:89] found id: ""
	I1210 05:58:02.593247   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.593254   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:02.593263   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:02.593283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.649484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:02.649502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:02.668256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:02.668270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:02.746961   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:02.738936   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.739525   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741090   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741618   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.743247   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:02.738936   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.739525   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741090   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741618   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.743247   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
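	[Note: the "connection refused" on localhost:8441 repeated throughout this log means the failure happens at the TCP layer, before TLS or authentication: no kube-apiserver is listening on the port, so every kubectl call aborts immediately. A minimal sketch reproducing just that check, assuming nothing about minikube internals:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Mirrors the kubectl failure above: nothing is listening on the
		// apiserver port, so the dial is rejected by the kernel.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}

	The doubled stderr above is minikube's own formatting: the command error embeds the output once, and the "** stderr **" block prints the captured stream again.]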
	I1210 05:58:02.746984   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:02.746995   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:02.810434   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:02.810451   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
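	[Note: the cycles that follow repeat the same pattern roughly every three seconds until the 4-minute deadline at 05:58:25: look for a kube-apiserver process with pgrep, list CRI containers for each control-plane component with crictl, find none, and re-gather kubelet/dmesg/describe-nodes/containerd/container-status logs. An editorial sketch of that loop's shape, not minikube's actual code; the retry bound and interval are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func apiserverRunning() bool {
	// Same check as the logged command: sudo pgrep -xnf kube-apiserver.*minikube.*
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func containerIDs(name string) string {
	// Same check as the logged command: sudo crictl ps -a --quiet --name=<name>
	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for attempt := 0; attempt < 5; attempt++ { // bounded for the sketch; minikube uses a deadline
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		for _, c := range components {
			if containerIDs(c) == "" {
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between passes
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
]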
	I1210 05:58:05.338812   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:05.348929   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:05.349015   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:05.376460   57716 cri.go:89] found id: ""
	I1210 05:58:05.376474   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.376481   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:05.376486   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:05.376545   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:05.401572   57716 cri.go:89] found id: ""
	I1210 05:58:05.401585   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.401593   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:05.401598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:05.401657   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:05.426804   57716 cri.go:89] found id: ""
	I1210 05:58:05.426820   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.426827   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:05.426832   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:05.426889   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:05.450557   57716 cri.go:89] found id: ""
	I1210 05:58:05.450570   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.450577   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:05.450583   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:05.450640   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:05.476587   57716 cri.go:89] found id: ""
	I1210 05:58:05.476601   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.476607   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:05.476612   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:05.476669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:05.501716   57716 cri.go:89] found id: ""
	I1210 05:58:05.501730   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.501736   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:05.501742   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:05.501801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:05.526971   57716 cri.go:89] found id: ""
	I1210 05:58:05.526985   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.526992   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:05.527000   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:05.527050   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:05.585508   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:05.585527   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:05.596526   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:05.596542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:05.661377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:05.650856   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.651411   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653229   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653833   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.655530   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:05.650856   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.651411   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653229   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653833   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.655530   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:05.661388   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:05.661398   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:05.732863   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:05.732882   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:08.260047   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:08.270586   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:08.270648   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:08.298955   57716 cri.go:89] found id: ""
	I1210 05:58:08.298984   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.298992   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:08.298997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:08.299088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:08.326321   57716 cri.go:89] found id: ""
	I1210 05:58:08.326335   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.326342   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:08.326347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:08.326410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:08.350063   57716 cri.go:89] found id: ""
	I1210 05:58:08.350077   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.350095   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:08.350100   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:08.350157   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:08.374459   57716 cri.go:89] found id: ""
	I1210 05:58:08.374472   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.374480   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:08.374485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:08.374549   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:08.398594   57716 cri.go:89] found id: ""
	I1210 05:58:08.398608   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.398615   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:08.398629   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:08.398685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:08.423334   57716 cri.go:89] found id: ""
	I1210 05:58:08.423348   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.423355   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:08.423366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:08.423424   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:08.448137   57716 cri.go:89] found id: ""
	I1210 05:58:08.448150   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.448157   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:08.448164   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:08.448175   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:08.510732   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:08.502942   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.503736   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505412   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505743   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.507339   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:08.502942   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.503736   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505412   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505743   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.507339   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:08.510751   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:08.510764   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:08.572194   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:08.572211   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:08.600446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:08.600463   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:08.657452   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:08.657469   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:11.170762   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:11.180886   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:11.180951   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:11.205555   57716 cri.go:89] found id: ""
	I1210 05:58:11.205569   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.205584   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:11.205590   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:11.205664   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:11.233080   57716 cri.go:89] found id: ""
	I1210 05:58:11.233094   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.233101   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:11.233106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:11.233164   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:11.257793   57716 cri.go:89] found id: ""
	I1210 05:58:11.257807   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.257814   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:11.257821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:11.257879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:11.282030   57716 cri.go:89] found id: ""
	I1210 05:58:11.282042   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.282050   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:11.282055   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:11.282119   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:11.305111   57716 cri.go:89] found id: ""
	I1210 05:58:11.305125   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.305132   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:11.305138   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:11.305196   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:11.329236   57716 cri.go:89] found id: ""
	I1210 05:58:11.329250   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.329257   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:11.329264   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:11.329320   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:11.354605   57716 cri.go:89] found id: ""
	I1210 05:58:11.354620   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.354627   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:11.354635   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:11.354645   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:11.386130   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:11.386146   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:11.444254   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:11.444272   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:11.455429   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:11.455446   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:11.522092   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:11.513675   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.514592   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516162   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516767   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.518501   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:11.513675   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.514592   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516162   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516767   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.518501   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:11.522102   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:11.522112   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:14.084603   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:14.094719   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:14.094779   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:14.118507   57716 cri.go:89] found id: ""
	I1210 05:58:14.118520   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.118528   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:14.118533   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:14.118588   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:14.144079   57716 cri.go:89] found id: ""
	I1210 05:58:14.144093   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.144100   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:14.144105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:14.144166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:14.174736   57716 cri.go:89] found id: ""
	I1210 05:58:14.174750   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.174757   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:14.174762   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:14.174837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:14.199688   57716 cri.go:89] found id: ""
	I1210 05:58:14.199709   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.199727   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:14.199733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:14.199789   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:14.227765   57716 cri.go:89] found id: ""
	I1210 05:58:14.227779   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.227786   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:14.227793   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:14.227853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:14.256531   57716 cri.go:89] found id: ""
	I1210 05:58:14.256546   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.256554   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:14.256559   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:14.256628   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:14.281035   57716 cri.go:89] found id: ""
	I1210 05:58:14.281054   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.281062   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:14.281070   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:14.281082   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:14.307632   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:14.307647   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:14.363636   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:14.363655   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:14.374356   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:14.374372   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:14.439204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:14.431102   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.431970   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.433831   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.434179   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.435681   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:14.431102   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.431970   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.433831   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.434179   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.435681   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:14.439214   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:14.439227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:17.000609   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:17.011094   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:17.011152   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:17.034914   57716 cri.go:89] found id: ""
	I1210 05:58:17.034928   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.034935   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:17.034940   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:17.034997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:17.059216   57716 cri.go:89] found id: ""
	I1210 05:58:17.059229   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.059236   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:17.059241   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:17.059297   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:17.084654   57716 cri.go:89] found id: ""
	I1210 05:58:17.084667   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.084674   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:17.084679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:17.084734   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:17.108452   57716 cri.go:89] found id: ""
	I1210 05:58:17.108465   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.108472   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:17.108477   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:17.108538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:17.131638   57716 cri.go:89] found id: ""
	I1210 05:58:17.131652   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.131660   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:17.131666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:17.131724   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:17.157073   57716 cri.go:89] found id: ""
	I1210 05:58:17.157086   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.157093   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:17.157099   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:17.157155   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:17.181834   57716 cri.go:89] found id: ""
	I1210 05:58:17.181849   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.181856   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:17.181864   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:17.181874   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:17.237484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:17.237500   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:17.248803   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:17.248818   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:17.312123   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:17.304256   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.304922   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.306601   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.307186   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.308722   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:17.304256   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.304922   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.306601   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.307186   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.308722   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:17.312135   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:17.312145   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:17.375552   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:17.375570   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:19.903470   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:19.915506   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:19.915564   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:19.947745   57716 cri.go:89] found id: ""
	I1210 05:58:19.947758   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.947765   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:19.947771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:19.947832   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:19.980662   57716 cri.go:89] found id: ""
	I1210 05:58:19.980676   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.980683   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:19.980688   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:19.980746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:20.014764   57716 cri.go:89] found id: ""
	I1210 05:58:20.014787   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.014795   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:20.014801   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:20.014868   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:20.043079   57716 cri.go:89] found id: ""
	I1210 05:58:20.043093   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.043100   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:20.043106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:20.043168   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:20.071694   57716 cri.go:89] found id: ""
	I1210 05:58:20.071709   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.071717   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:20.071722   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:20.071785   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:20.097931   57716 cri.go:89] found id: ""
	I1210 05:58:20.097945   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.097952   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:20.097958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:20.098028   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:20.122795   57716 cri.go:89] found id: ""
	I1210 05:58:20.122809   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.122816   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:20.122824   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:20.122835   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:20.133825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:20.133840   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:20.194901   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:20.186552   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.187444   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189151   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189874   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.191519   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:20.186552   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.187444   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189151   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189874   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.191519   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:20.194911   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:20.194921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:20.256875   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:20.256894   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:20.283841   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:20.283857   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:22.843646   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:22.853725   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:22.853782   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:22.878310   57716 cri.go:89] found id: ""
	I1210 05:58:22.878325   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.878332   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:22.878336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:22.878393   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:22.902470   57716 cri.go:89] found id: ""
	I1210 05:58:22.902483   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.902490   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:22.902495   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:22.902552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:22.929428   57716 cri.go:89] found id: ""
	I1210 05:58:22.929442   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.929449   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:22.929454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:22.929512   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:22.962201   57716 cri.go:89] found id: ""
	I1210 05:58:22.962215   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.962222   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:22.962227   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:22.962286   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:22.988315   57716 cri.go:89] found id: ""
	I1210 05:58:22.988329   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.988336   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:22.988341   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:22.988397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:23.015788   57716 cri.go:89] found id: ""
	I1210 05:58:23.015801   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.015818   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:23.015824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:23.015895   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:23.040476   57716 cri.go:89] found id: ""
	I1210 05:58:23.040490   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.040497   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:23.040505   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:23.040515   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:23.097263   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:23.097281   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:23.108339   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:23.108357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:23.174372   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:23.166022   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.166801   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168382   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168890   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.170644   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:23.166022   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.166801   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168382   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168890   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.170644   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:23.174382   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:23.174393   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:23.238417   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:23.238433   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:25.767502   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:25.777560   57716 kubeadm.go:602] duration metric: took 4m3.698254406s to restartPrimaryControlPlane
	W1210 05:58:25.777622   57716 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 05:58:25.777697   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 05:58:26.181572   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:58:26.194845   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:58:26.202430   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:58:26.202489   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:58:26.210414   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:58:26.210423   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 05:58:26.210474   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:58:26.218226   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:58:26.218281   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:58:26.225499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:58:26.233426   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:58:26.233479   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:58:26.240639   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.247882   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:58:26.247936   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.255235   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:58:26.263002   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:58:26.263069   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
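	[Note: the grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that fails the check (here, because `kubeadm reset` already removed them all, so grep exits with status 2) is deleted so the upcoming `kubeadm init` can write fresh ones. A minimal sketch of that check, assuming only the commands visible in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the pattern is absent (1) or the file is missing (2).
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%s may not reference %s - removing\n", path, endpoint)
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
]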
	I1210 05:58:26.270271   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:58:26.308640   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:58:26.308937   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:58:26.373888   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:58:26.373948   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 05:58:26.373980   57716 kubeadm.go:319] OS: Linux
	I1210 05:58:26.374022   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:58:26.374069   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:58:26.374113   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:58:26.374157   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:58:26.374200   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:58:26.374244   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:58:26.374300   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:58:26.374343   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:58:26.374385   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:58:26.445771   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:58:26.445880   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:58:26.445970   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:58:26.455518   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:58:26.460828   57716 out.go:252]   - Generating certificates and keys ...
	I1210 05:58:26.460930   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:58:26.461006   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:58:26.461110   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 05:58:26.461178   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 05:58:26.461260   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 05:58:26.461325   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 05:58:26.461413   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 05:58:26.461483   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 05:58:26.461565   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 05:58:26.461644   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 05:58:26.461682   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 05:58:26.461743   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:58:26.520044   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:58:27.005643   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:58:27.519831   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:58:27.780223   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:58:28.060883   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:58:28.061559   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:58:28.064834   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:58:28.067981   57716 out.go:252]   - Booting up control plane ...
	I1210 05:58:28.068070   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:58:28.068143   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:58:28.069383   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:58:28.090093   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:58:28.090188   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:58:28.097949   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:58:28.098042   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:58:28.098080   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:58:28.241595   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:58:28.241705   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:02:28.236858   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00011534s
	I1210 06:02:28.236887   57716 kubeadm.go:319] 
	I1210 06:02:28.236942   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:02:28.236986   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:02:28.237128   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:02:28.237135   57716 kubeadm.go:319] 
	I1210 06:02:28.237233   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:02:28.237262   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:02:28.237291   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:02:28.237295   57716 kubeadm.go:319] 
	I1210 06:02:28.241711   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:02:28.242149   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:02:28.242254   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:02:28.242529   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:02:28.242535   57716 kubeadm.go:319] 
	I1210 06:02:28.242598   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:02:28.242730   57716 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00011534s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 06:02:28.242815   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:02:28.653276   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:02:28.666846   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:02:28.666902   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:02:28.676196   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:02:28.676206   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 06:02:28.676262   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:02:28.683929   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:02:28.683984   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:02:28.691531   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:02:28.699193   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:02:28.699247   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:02:28.706499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.713695   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:02:28.713761   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.721311   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:02:28.729191   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:02:28.729245   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:02:28.737059   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:02:28.777392   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:02:28.777754   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:02:28.849302   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:02:28.849368   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:02:28.849403   57716 kubeadm.go:319] OS: Linux
	I1210 06:02:28.849460   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:02:28.849508   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:02:28.849555   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:02:28.849602   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:02:28.849649   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:02:28.849696   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:02:28.849745   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:02:28.849792   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:02:28.849837   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:02:28.921564   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:02:28.921662   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:02:28.921748   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:02:28.926509   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:02:28.929904   57716 out.go:252]   - Generating certificates and keys ...
	I1210 06:02:28.929994   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:02:28.930057   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:02:28.930131   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:02:28.930201   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:02:28.930270   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:02:28.930322   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:02:28.930384   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:02:28.930444   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:02:28.930517   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:02:28.930589   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:02:28.930766   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:02:28.930854   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:02:29.206630   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:02:29.720612   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:02:29.887413   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:02:30.011857   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:02:30.197709   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:02:30.198347   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:02:30.201006   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:02:30.204123   57716 out.go:252]   - Booting up control plane ...
	I1210 06:02:30.204220   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:02:30.204296   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:02:30.204794   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:02:30.227311   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:02:30.227437   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:02:30.235547   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:02:30.235634   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:02:30.235945   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:02:30.373162   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:02:30.373269   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:06:30.371537   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000138118s
	I1210 06:06:30.371561   57716 kubeadm.go:319] 
	I1210 06:06:30.371641   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:06:30.371685   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:06:30.371790   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:06:30.371795   57716 kubeadm.go:319] 
	I1210 06:06:30.371898   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:06:30.371929   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:06:30.371959   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:06:30.371962   57716 kubeadm.go:319] 
	I1210 06:06:30.376139   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:06:30.376577   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:06:30.376687   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:06:30.376961   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:06:30.376966   57716 kubeadm.go:319] 
	I1210 06:06:30.377035   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:06:30.377094   57716 kubeadm.go:403] duration metric: took 12m8.33567442s to StartCluster
	I1210 06:06:30.377125   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:06:30.377187   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:06:30.401132   57716 cri.go:89] found id: ""
	I1210 06:06:30.401147   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.401154   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:30.401160   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:06:30.401219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:06:30.437615   57716 cri.go:89] found id: ""
	I1210 06:06:30.437630   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.437637   57716 logs.go:284] No container was found matching "etcd"
	I1210 06:06:30.437642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:06:30.437699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:06:30.462667   57716 cri.go:89] found id: ""
	I1210 06:06:30.462681   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.462688   57716 logs.go:284] No container was found matching "coredns"
	I1210 06:06:30.462693   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:06:30.462752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:06:30.491407   57716 cri.go:89] found id: ""
	I1210 06:06:30.491420   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.491428   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:30.491433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:06:30.491493   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:06:30.516073   57716 cri.go:89] found id: ""
	I1210 06:06:30.516086   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.516092   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:30.516098   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:06:30.516154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:06:30.540636   57716 cri.go:89] found id: ""
	I1210 06:06:30.540649   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.540656   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:30.540679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:06:30.540736   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:06:30.565548   57716 cri.go:89] found id: ""
	I1210 06:06:30.565570   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.565578   57716 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:30.565586   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:30.565596   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:30.620548   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:30.620565   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:30.631284   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:30.631299   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:30.692450   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:30.684857   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.685223   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.686724   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.687076   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.688628   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:30.684857   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.685223   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.686724   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.687076   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.688628   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:30.692461   57716 logs.go:123] Gathering logs for containerd ...
	I1210 06:06:30.692471   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:06:30.755422   57716 logs.go:123] Gathering logs for container status ...
	I1210 06:06:30.755444   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:06:30.784033   57716 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:06:30.784067   57716 out.go:285] * 
	W1210 06:06:30.784157   57716 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.784176   57716 out.go:285] * 
	W1210 06:06:30.786468   57716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:06:30.793223   57716 out.go:203] 
	W1210 06:06:30.796021   57716 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.796079   57716 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:06:30.796099   57716 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:06:30.799180   57716 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477949649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477963918Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477995246Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478012321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478021774Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478031620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478040424Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478051649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478070291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478098854Z" level=info msg="Connect containerd service"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478383782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478960226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.497963642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498025206Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498057067Z" level=info msg="Start subscribing containerd event"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498101696Z" level=info msg="Start recovering state"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526273092Z" level=info msg="Start event monitor"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526463774Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526536103Z" level=info msg="Start streaming server"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526593630Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526675700Z" level=info msg="runtime interface starting up..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526739774Z" level=info msg="starting plugins..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526805581Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 05:54:20 functional-644034 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.528842308Z" level=info msg="containerd successfully booted in 0.071400s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:34.227211   21779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:34.228059   21779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:34.229685   21779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:34.230198   21779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:34.231810   21779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 06:06:34 up 49 min,  0 user,  load average: 0.42, 0.22, 0.38
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:06:31 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:31 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:31 functional-644034 kubelet[21623]: E1210 06:06:31.962273   21623 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:31 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:32 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 06:06:32 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:32 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:32 functional-644034 kubelet[21659]: E1210 06:06:32.743576   21659 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:32 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:32 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:33 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 06:06:33 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:33 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:33 functional-644034 kubelet[21695]: E1210 06:06:33.471305   21695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:33 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:33 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:06:34 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 10 06:06:34 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:34 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:06:34 functional-644034 kubelet[21783]: E1210 06:06:34.223939   21783 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:06:34 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:06:34 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (343.783389ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.20s)
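
The kubelet entries above pinpoint the root cause of this failure group: the v1.35.0-rc.1 kubelet refuses to start on a host still running cgroup v1, so systemd crash-loops it (restart counter 322 through 325) and the apiserver on localhost:8441 never answers. A minimal shell sketch for confirming a host's cgroup mode, assuming standard coreutils; the filesystem names below come from kernel conventions, not from this log:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1
	stat -fc %T /sys/fs/cgroup
	# on systemd hosts, v2 can be forced at boot via the kernel command line
	grep -o 'systemd.unified_cgroup_hierarchy=[01]' /proc/cmdline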

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-644034 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-644034 apply -f testdata/invalidsvc.yaml: exit status 1 (60.64189ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-644034 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)
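
The apply did not fail on the manifest itself: kubectl aborted while fetching the OpenAPI schema because the apiserver at 192.168.49.2:8441 refused the connection, the same kubelet crash loop as above. A quick reachability sketch to run before blaming testdata/invalidsvc.yaml, assuming curl is available (-k skips verification of minikube's self-signed certificate):

	curl -k --max-time 5 https://192.168.49.2:8441/healthz \
	  || echo "apiserver unreachable; the validation error is a side effect"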

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-644034 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-644034 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-644034 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-644034 --alsologtostderr -v=1] stderr:
I1210 06:08:45.665417   75147 out.go:360] Setting OutFile to fd 1 ...
I1210 06:08:45.665536   75147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:45.665547   75147 out.go:374] Setting ErrFile to fd 2...
I1210 06:08:45.665553   75147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:45.665813   75147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:08:45.666076   75147 mustload.go:66] Loading cluster: functional-644034
I1210 06:08:45.666500   75147 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:45.667001   75147 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:08:45.683406   75147 host.go:66] Checking if "functional-644034" exists ...
I1210 06:08:45.683745   75147 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:08:45.746497   75147 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.737125093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:08:45.746623   75147 api_server.go:166] Checking apiserver status ...
I1210 06:08:45.746690   75147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:08:45.746732   75147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:08:45.764110   75147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
W1210 06:08:45.868474   75147 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 06:08:45.871641   75147 out.go:179] * The control-plane node functional-644034 apiserver is not running: (state=Stopped)
I1210 06:08:45.874883   75147 out.go:179]   To start a cluster, run: "minikube start -p functional-644034"
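
The dashboard command never printed a URL because its preflight apiserver probe failed: "sudo pgrep -xnf kube-apiserver.*minikube.*" exited 1 (no matching process), so minikube reported state=Stopped and gave up. The same probe can be repeated by hand with the report's own binary; a sketch assuming the profile container is still running:

	out/minikube-linux-arm64 -p functional-644034 ssh -- \
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
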
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (313.407194ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-644034 service hello-node --url                                                                                                          │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount     │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001:/mount-9p --alsologtostderr -v=1               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh -- ls -la /mount-9p                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh cat /mount-9p/test-1765346915509669717                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh sudo umount -f /mount-9p                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ mount     │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3916605591/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh -- ls -la /mount-9p                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh sudo umount -f /mount-9p                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount     │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount2 --alsologtostderr -v=1                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount     │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount3 --alsologtostderr -v=1                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount     │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount1 --alsologtostderr -v=1                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount1                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount1                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh findmnt -T /mount2                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh findmnt -T /mount3                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ mount     │ -p functional-644034 --kill=true                                                                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ start     │ -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ start     │ -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ start     │ -p functional-644034 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1             │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-644034 --alsologtostderr -v=1                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:08:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:08:45.411458   75070 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:08:45.411572   75070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.411578   75070 out.go:374] Setting ErrFile to fd 2...
	I1210 06:08:45.411583   75070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.411858   75070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:08:45.412318   75070 out.go:368] Setting JSON to false
	I1210 06:08:45.413062   75070 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3076,"bootTime":1765343850,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:08:45.413124   75070 start.go:143] virtualization:  
	I1210 06:08:45.416311   75070 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:08:45.420093   75070 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:08:45.420262   75070 notify.go:221] Checking for updates...
	I1210 06:08:45.426058   75070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:08:45.428921   75070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:08:45.431634   75070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:08:45.435298   75070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:08:45.438128   75070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:08:45.441516   75070 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:08:45.442087   75070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:08:45.475268   75070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:08:45.475386   75070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.544088   75070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.534810687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.544195   75070 docker.go:319] overlay module found
	I1210 06:08:45.547299   75070 out.go:179] * Using the docker driver based on existing profile
	I1210 06:08:45.550158   75070 start.go:309] selected driver: docker
	I1210 06:08:45.550174   75070 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.550288   75070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:08:45.550407   75070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.603255   75070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.594348639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.603659   75070 cni.go:84] Creating CNI manager for ""
	I1210 06:08:45.603722   75070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:08:45.603784   75070 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.606777   75070 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477949649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477963918Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477995246Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478012321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478021774Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478031620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478040424Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478051649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478070291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478098854Z" level=info msg="Connect containerd service"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478383782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478960226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.497963642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498025206Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498057067Z" level=info msg="Start subscribing containerd event"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498101696Z" level=info msg="Start recovering state"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526273092Z" level=info msg="Start event monitor"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526463774Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526536103Z" level=info msg="Start streaming server"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526593630Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526675700Z" level=info msg="runtime interface starting up..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526739774Z" level=info msg="starting plugins..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526805581Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 05:54:20 functional-644034 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.528842308Z" level=info msg="containerd successfully booted in 0.071400s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:46.892756   23912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:46.893271   23912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:46.894742   23912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:46.895284   23912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:46.896834   23912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 06:08:46 up 51 min,  0 user,  load average: 0.55, 0.30, 0.38
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:08:43 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:44 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 10 06:08:44 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:44 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:44 functional-644034 kubelet[23773]: E1210 06:08:44.741620   23773 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:44 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:44 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:45 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 10 06:08:45 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:45 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:45 functional-644034 kubelet[23794]: E1210 06:08:45.472878   23794 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:45 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:45 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:46 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 501.
	Dec 10 06:08:46 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:46 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:46 functional-644034 kubelet[23811]: E1210 06:08:46.206389   23811 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:46 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:46 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:46 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 502.
	Dec 10 06:08:46 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:46 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:46 functional-644034 kubelet[23916]: E1210 06:08:46.965323   23916 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:46 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:46 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
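
Two details stand out in the dump above: containerd itself restarts cleanly ("containerd successfully booted in 0.071400s") yet finds no CNI config in /etc/cni/net.d, and the kubelet is still in its cgroup v1 crash loop (restart counter around 500), which is why the container status table is empty. A sketch for checking both from inside the node, assuming crictl is present in the kicbase image:

	out/minikube-linux-arm64 -p functional-644034 ssh -- ls -la /etc/cni/net.d
	out/minikube-linux-arm64 -p functional-644034 ssh -- sudo crictl ps -a
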
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (305.823088ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 status: exit status 2 (321.748168ms)

-- stdout --
	functional-644034
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-644034 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (362.714331ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-644034 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 status -o json: exit status 2 (298.096561ms)

-- stdout --
	{"Name":"functional-644034","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-644034 status -o json" : exit status 2
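The JSON form is the easiest one to script against. A small sketch, assuming jq is available on the host (an assumption, not part of this test environment):

  out/minikube-linux-arm64 -p functional-644034 status -o json | jq -r '.APIServer'  # "Stopped" for this profile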
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
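When only a few fields matter, docker inspect can be narrowed with a Go template instead of dumping the whole document; a sketch using the container and network names from the inspect output above:

  docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "functional-644034").IPAddress}}' functional-644034  # running 192.168.49.2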
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (302.258849ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-644034 service list                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ service │ functional-644034 service list -o json                                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ service │ functional-644034 service --namespace=default --https --url hello-node                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ service │ functional-644034 service hello-node --url --format={{.IP}}                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ service │ functional-644034 service hello-node --url                                                                                                          │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount   │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001:/mount-9p --alsologtostderr -v=1               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh     │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh     │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh     │ functional-644034 ssh -- ls -la /mount-9p                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh     │ functional-644034 ssh cat /mount-9p/test-1765346915509669717                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh     │ functional-644034 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh     │ functional-644034 ssh sudo umount -f /mount-9p                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ mount   │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3916605591/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh     │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh     │ functional-644034 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh     │ functional-644034 ssh -- ls -la /mount-9p                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh     │ functional-644034 ssh sudo umount -f /mount-9p                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount   │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount2 --alsologtostderr -v=1                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount   │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount3 --alsologtostderr -v=1                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ mount   │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount1 --alsologtostderr -v=1                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh     │ functional-644034 ssh findmnt -T /mount1                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh     │ functional-644034 ssh findmnt -T /mount1                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh     │ functional-644034 ssh findmnt -T /mount2                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh     │ functional-644034 ssh findmnt -T /mount3                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ mount   │ -p functional-644034 --kill=true                                                                                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
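Each of the ssh rows above is a one-shot command run inside the node; for example, the 9p mount check can be reproduced by hand like this (a sketch, same profile and mount point as in the table):

  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T /mount-9p | grep 9p"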
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:54:17.426935   57716 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:54:17.427082   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427086   57716 out.go:374] Setting ErrFile to fd 2...
	I1210 05:54:17.427090   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427361   57716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:54:17.427717   57716 out.go:368] Setting JSON to false
	I1210 05:54:17.428531   57716 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2208,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:54:17.428587   57716 start.go:143] virtualization:  
	I1210 05:54:17.432151   57716 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:54:17.435955   57716 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:54:17.436010   57716 notify.go:221] Checking for updates...
	I1210 05:54:17.441966   57716 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:54:17.444885   57716 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:54:17.447901   57716 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:54:17.450919   57716 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:54:17.453767   57716 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:54:17.457197   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:17.457296   57716 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:54:17.484154   57716 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:54:17.484249   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.544910   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.535741476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:54:17.545002   57716 docker.go:319] overlay module found
	I1210 05:54:17.548056   57716 out.go:179] * Using the docker driver based on existing profile
	I1210 05:54:17.550880   57716 start.go:309] selected driver: docker
	I1210 05:54:17.550888   57716 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.550973   57716 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:54:17.551147   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.606051   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.597194445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:54:17.606475   57716 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:54:17.606497   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:17.606551   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:17.606592   57716 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.611686   57716 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:54:17.614501   57716 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:54:17.617345   57716 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:54:17.620208   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:17.620284   57716 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:54:17.639591   57716 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:54:17.639602   57716 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:54:17.674108   57716 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:54:17.814864   57716 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
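Both preload mirrors return 404 for v1.35.0-rc.1, so minikube falls back to caching individual images (the cache.go lines further down). Whether a preload tarball exists can be confirmed from a shell with a plain status-code probe, using the first URL from the warning above:

  curl -s -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4  # 404: no preload published for this RC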
	I1210 05:54:17.815057   57716 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:54:17.815157   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:17.815311   57716 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:54:17.815341   57716 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:17.815383   57716 start.go:364] duration metric: took 26.643µs to acquireMachinesLock for "functional-644034"
	I1210 05:54:17.815394   57716 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:54:17.815398   57716 fix.go:54] fixHost starting: 
	I1210 05:54:17.815657   57716 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:54:17.832534   57716 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:54:17.832556   57716 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:54:17.836244   57716 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:54:17.836271   57716 machine.go:94] provisionDockerMachine start ...
	I1210 05:54:17.836346   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:17.858100   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:17.858407   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:17.858412   57716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:54:17.974240   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.011085   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:54:18.011101   57716 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:54:18.011170   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.035073   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.035392   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.035402   57716 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:54:18.133146   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.205140   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:54:18.205224   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.223112   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.223456   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.223470   57716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:54:18.298229   57716 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298265   57716 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298312   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:54:18.298319   57716 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.857µs
	I1210 05:54:18.298326   57716 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:54:18.298329   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:54:18.298336   57716 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298351   57716 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 82.455µs
	I1210 05:54:18.298357   57716 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298363   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:54:18.298368   57716 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.182µs
	I1210 05:54:18.298372   57716 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:54:18.298368   57716 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298381   57716 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298411   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:54:18.298406   57716 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298417   57716 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.08µs
	I1210 05:54:18.298422   57716 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:54:18.298434   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:54:18.298430   57716 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298438   57716 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 33.1µs
	I1210 05:54:18.298443   57716 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:54:18.298232   57716 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298464   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:54:18.298468   57716 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 256.891µs
	I1210 05:54:18.298472   57716 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298474   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:54:18.298480   57716 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.314µs
	I1210 05:54:18.298482   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:54:18.298484   57716 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298489   57716 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 122.242µs
	I1210 05:54:18.298496   57716 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298511   57716 cache.go:87] Successfully saved all images to host disk.
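The per-image cache that replaces the missing preload lives under the MINIKUBE_HOME shown in these lines; listing it is a quick way to verify the control-plane images were all saved (a sketch, path taken from the log):

  ls /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/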
	I1210 05:54:18.371362   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:54:18.371378   57716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:54:18.371397   57716 ubuntu.go:190] setting up certificates
	I1210 05:54:18.371416   57716 provision.go:84] configureAuth start
	I1210 05:54:18.371483   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:18.389550   57716 provision.go:143] copyHostCerts
	I1210 05:54:18.389620   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:54:18.389627   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:54:18.389704   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:54:18.389803   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:54:18.389808   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:54:18.389833   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:54:18.389882   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:54:18.389885   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:54:18.389906   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:54:18.389948   57716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:54:18.683488   57716 provision.go:177] copyRemoteCerts
	I1210 05:54:18.683553   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:54:18.683598   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.701578   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.806523   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:54:18.823889   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:54:18.841176   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:54:18.858693   57716 provision.go:87] duration metric: took 487.253139ms to configureAuth
	I1210 05:54:18.858709   57716 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:54:18.858903   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:18.858907   57716 machine.go:97] duration metric: took 1.02263281s to provisionDockerMachine
	I1210 05:54:18.858914   57716 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:54:18.858924   57716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:54:18.858977   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:54:18.859033   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.876377   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.982817   57716 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:54:18.986081   57716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:54:18.986098   57716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:54:18.986108   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:54:18.986162   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:54:18.986244   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:54:18.986314   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:54:18.986361   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:54:18.994265   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:19.014263   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:54:19.031905   57716 start.go:296] duration metric: took 172.976805ms for postStartSetup
	I1210 05:54:19.031977   57716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:54:19.032030   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.049399   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.152285   57716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:54:19.157124   57716 fix.go:56] duration metric: took 1.341718894s for fixHost
	I1210 05:54:19.157140   57716 start.go:83] releasing machines lock for "functional-644034", held for 1.341749918s
	I1210 05:54:19.157254   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:19.178380   57716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:54:19.178438   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.178590   57716 ssh_runner.go:195] Run: cat /version.json
	I1210 05:54:19.178645   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.200917   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.208552   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.319193   57716 ssh_runner.go:195] Run: systemctl --version
	I1210 05:54:19.412255   57716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:54:19.416947   57716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:54:19.417021   57716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:54:19.424890   57716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:54:19.424903   57716 start.go:496] detecting cgroup driver to use...
	I1210 05:54:19.424932   57716 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:54:19.425004   57716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:54:19.440745   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:54:19.453977   57716 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:54:19.454039   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:54:19.469832   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:54:19.482994   57716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:54:19.599891   57716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:54:19.715074   57716 docker.go:234] disabling docker service ...
	I1210 05:54:19.715128   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:54:19.730660   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:54:19.743680   57716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:54:19.856717   57716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:54:20.006361   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:54:20.021419   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:54:20.038786   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.191836   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:54:20.201486   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:54:20.210685   57716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:54:20.210748   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:54:20.219896   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.228857   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:54:20.237489   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.246148   57716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:54:20.253998   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:54:20.262613   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:54:20.271236   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
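Taken together, the sed passes above edit a handful of keys in /etc/containerd/config.toml. A sketch of the result — the exact layout depends on the default config containerd 2.x ships, but the edited keys are the ones named in the commands:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false   # matches the detected "cgroupfs" host driver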
	I1210 05:54:20.280061   57716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:54:20.287623   57716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:54:20.295156   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:20.415485   57716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:54:20.529881   57716 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:54:20.529941   57716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:54:20.533915   57716 start.go:564] Will wait 60s for crictl version
	I1210 05:54:20.533980   57716 ssh_runner.go:195] Run: which crictl
	I1210 05:54:20.537488   57716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:54:20.562843   57716 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:54:20.562909   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.586515   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.613476   57716 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:54:20.616435   57716 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:54:20.632538   57716 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:54:20.639504   57716 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 05:54:20.642345   57716 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:54:20.642611   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.817647   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.968512   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:21.117681   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:21.117754   57716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:54:21.141602   57716 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:54:21.141614   57716 cache_images.go:86] Images are preloaded, skipping loading
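The "preloaded" decision comes straight from that crictl listing; a hypothetical stand-alone version of the probe (the image tag below is illustrative, not taken from this run):

    # If containerd already reports the expected control-plane images,
    # minikube skips loading the cached tarballs.
    sudo crictl images --output json \
      | grep -q 'kube-apiserver:v1.35.0-rc.1' \
      && echo "images preloaded - skipping load" \
      || echo "images missing - would load from cache"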
	I1210 05:54:21.141620   57716 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:54:21.141710   57716 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:54:21.141768   57716 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:54:21.167304   57716 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 05:54:21.167327   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:21.167335   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:21.167343   57716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:54:21.167363   57716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:54:21.167468   57716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:54:21.167528   57716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:54:21.175157   57716 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:54:21.175220   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:54:21.182336   57716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:54:21.194714   57716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:54:21.206951   57716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1210 05:54:21.218855   57716 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:54:21.222543   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:21.341027   57716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:54:21.356762   57716 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:54:21.356773   57716 certs.go:195] generating shared ca certs ...
	I1210 05:54:21.356789   57716 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:54:21.356923   57716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:54:21.356964   57716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:54:21.356970   57716 certs.go:257] generating profile certs ...
	I1210 05:54:21.357053   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:54:21.357114   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:54:21.357152   57716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:54:21.357258   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:54:21.357288   57716 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:54:21.357307   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:54:21.357333   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:54:21.357354   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:54:21.357375   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:54:21.357423   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:21.357978   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:54:21.378744   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:54:21.397697   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:54:21.419957   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:54:21.438314   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:54:21.455834   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:54:21.473865   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:54:21.494612   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:54:21.512109   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:54:21.529720   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:54:21.547670   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:54:21.568707   57716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:54:21.582063   57716 ssh_runner.go:195] Run: openssl version
	I1210 05:54:21.588394   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.595862   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:54:21.603363   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607193   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607247   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.648234   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:54:21.655574   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.662804   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:54:21.670452   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674182   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674235   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.715273   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:54:21.722425   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.729498   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:54:21.736743   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740323   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740376   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.780972   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:54:21.788152   57716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:54:21.791770   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:54:21.832469   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:54:21.875333   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:54:21.915959   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:54:21.956552   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:54:21.998157   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
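Each of these openssl probes asks one question of a certificate: will it still be valid 24 hours from now? A minimal stand-alone version of the same check (path as in the log):

    # -checkend 86400 exits 0 if the cert does NOT expire within the next
    # 86400 seconds (24h); a non-zero exit would trigger regeneration.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    fi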
	I1210 05:54:22.041430   57716 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:22.041511   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:54:22.041600   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.071281   57716 cri.go:89] found id: ""
	I1210 05:54:22.071348   57716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:54:22.079286   57716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:54:22.079296   57716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:54:22.079350   57716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:54:22.086777   57716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.087401   57716 kubeconfig.go:125] found "functional-644034" server: "https://192.168.49.2:8441"
	I1210 05:54:22.088728   57716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:54:22.096851   57716 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:39:45.645176984 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 05:54:21.211483495 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
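The drift check above is nothing more than diff's exit status; a sketch of the same decision (paths as in the log):

    # diff -u exits 0 when the rendered config matches what is on disk;
    # any difference marks the cluster for reconfiguration.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config drift detected - will reconfigure from kubeadm.yaml.new"
    fi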
	I1210 05:54:22.096860   57716 kubeadm.go:1161] stopping kube-system containers ...
	I1210 05:54:22.096878   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 05:54:22.096937   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.122240   57716 cri.go:89] found id: ""
	I1210 05:54:22.122301   57716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 05:54:22.139987   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:54:22.147655   57716 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 05:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:43 /etc/kubernetes/scheduler.conf
	
	I1210 05:54:22.147725   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:54:22.155240   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:54:22.163328   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.163381   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:54:22.170477   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.178188   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.178242   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.185324   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:54:22.192557   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.192613   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
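The three grep-then-rm pairs above all follow one pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint (admin.conf passed the grep and is kept). The equivalent loop:

    # Remove any kubeconfig that does not reference the expected endpoint,
    # so the following kubeadm phases regenerate it with the right address.
    for f in kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done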
	I1210 05:54:22.199756   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:54:22.207462   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:22.254516   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:23.834868   57716 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.580327189s)
	I1210 05:54:23.834928   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.033268   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.102476   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
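The restart path re-runs individual kubeadm init phases rather than a full init; stripped of the PATH wrapper, the sequence invoked above is:

    # Same five phases, in the order minikube runs them for a restart:
    sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml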
	I1210 05:54:24.150822   57716 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:54:24.150892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:54:24.651134 – 05:55:23.651890   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* (the identical probe repeated at ~500ms intervals, 119 attempts in total; no apiserver process was found)
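The condensed probe run above is a simple poll; its shape, as a sketch (the ~500ms cadence matches the timestamps):

    # Poll for a kube-apiserver process every ~500ms; in this run the
    # probe never succeeded, so minikube moved on to gathering logs.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done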
	I1210 05:55:24.151853   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:24.151952   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:24.176715   57716 cri.go:89] found id: ""
	I1210 05:55:24.176729   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.176736   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:24.176741   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:24.176801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:24.199798   57716 cri.go:89] found id: ""
	I1210 05:55:24.199811   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.199819   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:24.199824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:24.199881   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:24.223446   57716 cri.go:89] found id: ""
	I1210 05:55:24.223459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.223466   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:24.223471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:24.223533   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:24.247963   57716 cri.go:89] found id: ""
	I1210 05:55:24.247976   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.247984   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:24.247989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:24.248052   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:24.271064   57716 cri.go:89] found id: ""
	I1210 05:55:24.271078   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.271085   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:24.271090   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:24.271156   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:24.295582   57716 cri.go:89] found id: ""
	I1210 05:55:24.295595   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.295603   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:24.295608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:24.295665   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:24.319439   57716 cri.go:89] found id: ""
	I1210 05:55:24.319459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.319466   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:24.319474   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:24.319484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:24.374536   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:24.374555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:24.385677   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:24.385693   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:24.468968   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:24.460916   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.461690   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463385   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463760   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.465289   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:24.460916   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.461690   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463385   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463760   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.465289   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:24.468989   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:24.469008   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:24.534097   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:24.534114   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
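The container-status gather is deliberately forgiving about tooling; the same fallback chain, written with $() instead of backticks:

    # Prefer an installed crictl (via which), fall back to the bare name,
    # and use docker if the CRI listing fails outright.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a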
	I1210 05:55:27.065851   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:27.076794   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:27.076855   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:27.102051   57716 cri.go:89] found id: ""
	I1210 05:55:27.102064   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.102072   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:27.102087   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:27.102159   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:27.125833   57716 cri.go:89] found id: ""
	I1210 05:55:27.125846   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.125853   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:27.125858   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:27.125916   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:27.150782   57716 cri.go:89] found id: ""
	I1210 05:55:27.150795   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.150803   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:27.150808   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:27.150870   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:27.177446   57716 cri.go:89] found id: ""
	I1210 05:55:27.177459   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.177467   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:27.177472   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:27.177530   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:27.202542   57716 cri.go:89] found id: ""
	I1210 05:55:27.202557   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.202564   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:27.202570   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:27.202631   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:27.229302   57716 cri.go:89] found id: ""
	I1210 05:55:27.229316   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.229323   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:27.229328   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:27.229389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:27.258140   57716 cri.go:89] found id: ""
	I1210 05:55:27.258154   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.258162   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:27.258170   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:27.258179   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:27.313276   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:27.313296   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:27.324237   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:27.324252   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:27.386291   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:27.378930   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.379718   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381201   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381605   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.383124   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:27.378930   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.379718   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381201   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381605   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.383124   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:27.386311   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:27.386321   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:27.451779   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:27.451797   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:29.984865   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:29.994990   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:29.995106   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:30.034785   57716 cri.go:89] found id: ""
	I1210 05:55:30.034800   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.034808   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:30.034815   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:30.034899   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:30.063792   57716 cri.go:89] found id: ""
	I1210 05:55:30.063807   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.063816   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:30.063821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:30.063895   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:30.095916   57716 cri.go:89] found id: ""
	I1210 05:55:30.095931   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.095939   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:30.095945   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:30.096020   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:30.123266   57716 cri.go:89] found id: ""
	I1210 05:55:30.123293   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.123300   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:30.123306   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:30.123378   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:30.149145   57716 cri.go:89] found id: ""
	I1210 05:55:30.149159   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.149167   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:30.149173   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:30.149231   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:30.178515   57716 cri.go:89] found id: ""
	I1210 05:55:30.178529   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.178536   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:30.178541   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:30.178601   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:30.202938   57716 cri.go:89] found id: ""
	I1210 05:55:30.202952   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.202959   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:30.202968   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:30.202977   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:30.262024   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:30.262042   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:30.273395   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:30.273411   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:30.339082   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:30.331422   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.332246   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.333884   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.334216   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.335714   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:30.331422   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.332246   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.333884   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.334216   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.335714   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:30.339099   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:30.339111   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:30.401574   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:30.401599   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
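	Each iteration lists CRI containers component by component with the same crictl flags. A short loop reproduces the sweep; the component list below simply mirrors the names queried in the log:

		# query each expected control-plane container by name, all states (-a)
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		            kube-controller-manager kindnet; do
		  ids=$(sudo crictl ps -a --quiet --name="$name")
		  [ -z "$ids" ] && echo "no container matching $name"
		done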
	I1210 05:55:32.947286   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:32.957296   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:32.957360   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:32.982165   57716 cri.go:89] found id: ""
	I1210 05:55:32.982179   57716 logs.go:282] 0 containers: []
	W1210 05:55:32.982186   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:32.982191   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:32.982247   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:33.020504   57716 cri.go:89] found id: ""
	I1210 05:55:33.020517   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.020525   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:33.020530   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:33.020590   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:33.045171   57716 cri.go:89] found id: ""
	I1210 05:55:33.045185   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.045193   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:33.045198   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:33.045261   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:33.069898   57716 cri.go:89] found id: ""
	I1210 05:55:33.069923   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.069931   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:33.069936   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:33.070003   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:33.094592   57716 cri.go:89] found id: ""
	I1210 05:55:33.094607   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.094614   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:33.094619   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:33.094687   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:33.119752   57716 cri.go:89] found id: ""
	I1210 05:55:33.119765   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.119772   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:33.119778   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:33.119842   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:33.144728   57716 cri.go:89] found id: ""
	I1210 05:55:33.144742   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.144749   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:33.144757   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:33.144767   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:33.202510   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:33.202527   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:33.213898   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:33.213914   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:33.276996   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:33.269599   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.270004   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.271689   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.272071   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.273649   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:33.269599   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.270004   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.271689   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.272071   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.273649   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:33.277006   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:33.277016   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:33.337654   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:33.337675   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
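	With no containers found, the loop falls back to host-level logs: the last 400 journal lines for the kubelet and containerd units, plus warning-and-above kernel messages. These commands are verbatim from the Run: lines above and can be executed directly on the node:

		sudo journalctl -u kubelet -n 400        # kubelet unit log, last 400 lines
		sudo journalctl -u containerd -n 400     # containerd unit log
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400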
	I1210 05:55:35.867520   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:35.877494   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:35.877552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:35.903487   57716 cri.go:89] found id: ""
	I1210 05:55:35.903501   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.903508   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:35.903514   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:35.903571   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:35.933040   57716 cri.go:89] found id: ""
	I1210 05:55:35.933054   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.933060   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:35.933066   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:35.933150   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:35.956439   57716 cri.go:89] found id: ""
	I1210 05:55:35.956453   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.956460   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:35.956466   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:35.956522   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:35.983120   57716 cri.go:89] found id: ""
	I1210 05:55:35.983133   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.983140   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:35.983155   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:35.983213   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:36.024072   57716 cri.go:89] found id: ""
	I1210 05:55:36.024085   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.024093   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:36.024098   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:36.024163   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:36.050259   57716 cri.go:89] found id: ""
	I1210 05:55:36.050282   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.050289   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:36.050296   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:36.050375   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:36.079897   57716 cri.go:89] found id: ""
	I1210 05:55:36.079911   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.079918   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:36.079925   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:36.079935   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:36.109390   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:36.109405   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:36.164390   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:36.164407   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:36.175368   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:36.175383   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:36.247833   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:36.240230   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.240985   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.242571   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.243126   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.244643   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:36.240230   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.240985   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.242571   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.243126   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.244643   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:36.247845   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:36.247855   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:38.808939   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:38.819051   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:38.819128   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:38.843620   57716 cri.go:89] found id: ""
	I1210 05:55:38.843643   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.843650   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:38.843656   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:38.843713   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:38.872120   57716 cri.go:89] found id: ""
	I1210 05:55:38.872134   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.872141   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:38.872147   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:38.872204   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:38.896725   57716 cri.go:89] found id: ""
	I1210 05:55:38.896738   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.896746   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:38.896751   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:38.896807   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:38.924643   57716 cri.go:89] found id: ""
	I1210 05:55:38.924657   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.924665   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:38.924670   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:38.924729   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:38.952693   57716 cri.go:89] found id: ""
	I1210 05:55:38.952706   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.952714   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:38.952719   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:38.952774   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:38.976175   57716 cri.go:89] found id: ""
	I1210 05:55:38.976189   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.976196   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:38.976201   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:38.976266   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:39.001657   57716 cri.go:89] found id: ""
	I1210 05:55:39.001671   57716 logs.go:282] 0 containers: []
	W1210 05:55:39.001678   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:39.001686   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:39.001698   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:39.013220   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:39.013240   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:39.084372   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:39.073429   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.077278   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.078119   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.079595   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.080038   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:39.073429   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.077278   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.078119   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.079595   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.080038   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:39.084383   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:39.084393   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:39.145338   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:39.145357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:39.173909   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:39.173925   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
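	Every "describe nodes" attempt fails identically: kubectl resolves the admin kubeconfig to https://localhost:8441 and the dial is refused, which is consistent with the earlier pgrep/crictl results (no apiserver process at all) rather than a kubeconfig problem. A quick check that separates the two causes, assuming the same binary and kubeconfig paths as the log:

		# exits 0 only if something is listening and answering on 8441
		sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /readyz \
		  --kubeconfig=/var/lib/minikube/kubeconfig \
		  || echo 'apiserver not listening on localhost:8441'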
	I1210 05:55:41.731159   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:41.741270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:41.741329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:41.765933   57716 cri.go:89] found id: ""
	I1210 05:55:41.765946   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.765953   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:41.765958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:41.766034   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:41.790822   57716 cri.go:89] found id: ""
	I1210 05:55:41.790842   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.790850   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:41.790855   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:41.790924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:41.817287   57716 cri.go:89] found id: ""
	I1210 05:55:41.817300   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.817312   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:41.817318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:41.817386   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:41.842964   57716 cri.go:89] found id: ""
	I1210 05:55:41.842978   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.842986   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:41.842991   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:41.843068   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:41.871615   57716 cri.go:89] found id: ""
	I1210 05:55:41.871629   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.871637   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:41.871642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:41.871699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:41.896188   57716 cri.go:89] found id: ""
	I1210 05:55:41.896216   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.896223   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:41.896229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:41.896294   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:41.930282   57716 cri.go:89] found id: ""
	I1210 05:55:41.930296   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.930303   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:41.930311   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:41.930320   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:41.985380   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:41.985397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:42.004532   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:42.004551   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:42.075101   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:42.075129   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:42.075143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:42.145894   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:42.145929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:44.679885   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:44.690876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:44.690937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:44.720897   57716 cri.go:89] found id: ""
	I1210 05:55:44.720911   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.720918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:44.720923   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:44.720983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:44.745408   57716 cri.go:89] found id: ""
	I1210 05:55:44.745421   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.745427   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:44.745432   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:44.745495   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:44.773707   57716 cri.go:89] found id: ""
	I1210 05:55:44.773721   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.773728   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:44.773733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:44.773792   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:44.798508   57716 cri.go:89] found id: ""
	I1210 05:55:44.798522   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.798529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:44.798535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:44.798597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:44.822493   57716 cri.go:89] found id: ""
	I1210 05:55:44.822507   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.822515   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:44.822519   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:44.822578   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:44.847294   57716 cri.go:89] found id: ""
	I1210 05:55:44.847308   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.847316   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:44.847321   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:44.847380   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:44.870447   57716 cri.go:89] found id: ""
	I1210 05:55:44.870460   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.870468   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:44.870475   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:44.870485   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:44.926160   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:44.926177   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:44.937022   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:44.937037   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:45.007191   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:45.007203   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:45.007215   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:45.103439   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:45.103467   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
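	The "container status" step uses a small fallback chain, visible verbatim in the Run: line: prefer crictl when it is on PATH, otherwise let the literal name resolve at run time, and fall back to docker if the crictl invocation fails outright:

		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a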
	I1210 05:55:47.653520   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:47.663666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:47.663731   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:47.697444   57716 cri.go:89] found id: ""
	I1210 05:55:47.697457   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.697464   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:47.697469   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:47.697529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:47.728308   57716 cri.go:89] found id: ""
	I1210 05:55:47.728322   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.728329   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:47.728334   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:47.728391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:47.753518   57716 cri.go:89] found id: ""
	I1210 05:55:47.753531   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.753538   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:47.753543   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:47.753600   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:47.777296   57716 cri.go:89] found id: ""
	I1210 05:55:47.777309   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.777316   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:47.777322   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:47.777378   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:47.800977   57716 cri.go:89] found id: ""
	I1210 05:55:47.800998   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.801005   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:47.801010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:47.801067   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:47.825052   57716 cri.go:89] found id: ""
	I1210 05:55:47.825065   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.825073   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:47.825078   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:47.825147   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:47.848863   57716 cri.go:89] found id: ""
	I1210 05:55:47.848876   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.848883   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:47.848892   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:47.848902   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:47.905124   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:47.905139   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:47.915783   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:47.915800   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:47.980730   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:47.980740   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:47.980750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:48.042937   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:48.042955   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:50.581353   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:50.591210   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:50.591269   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:50.620774   57716 cri.go:89] found id: ""
	I1210 05:55:50.620788   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.620794   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:50.620800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:50.620864   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:50.645050   57716 cri.go:89] found id: ""
	I1210 05:55:50.645064   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.645071   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:50.645082   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:50.645146   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:50.679878   57716 cri.go:89] found id: ""
	I1210 05:55:50.679890   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.679897   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:50.679903   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:50.679960   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:50.710005   57716 cri.go:89] found id: ""
	I1210 05:55:50.710018   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.710026   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:50.710032   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:50.710088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:50.744288   57716 cri.go:89] found id: ""
	I1210 05:55:50.744302   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.744311   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:50.744317   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:50.744373   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:50.767954   57716 cri.go:89] found id: ""
	I1210 05:55:50.767967   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.767974   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:50.767980   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:50.768037   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:50.796157   57716 cri.go:89] found id: ""
	I1210 05:55:50.796171   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.796179   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:50.796186   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:50.796196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:50.851621   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:50.851638   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:50.863074   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:50.863091   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:50.939619   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:50.939629   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:50.939639   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:51.008577   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:51.008598   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
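	The cri.go lines name /run/containerd/runc/k8s.io as the runtime root: containerd keeps Kubernetes-managed containers in its k8s.io namespace. An empty crictl listing can therefore be cross-checked against containerd directly; this command is an assumption of this note, not part of the original log:

		# list containers in containerd's k8s.io namespace; empty output here
		# would match the repeated 'found id: ""' results above
		sudo ctr -n k8s.io containers ls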
	I1210 05:55:53.537065   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:53.546821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:53.546878   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:53.571853   57716 cri.go:89] found id: ""
	I1210 05:55:53.571867   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.571874   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:53.571879   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:53.571937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:53.595941   57716 cri.go:89] found id: ""
	I1210 05:55:53.595955   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.595962   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:53.595967   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:53.596023   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:53.620466   57716 cri.go:89] found id: ""
	I1210 05:55:53.620480   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.620486   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:53.620492   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:53.620546   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:53.643628   57716 cri.go:89] found id: ""
	I1210 05:55:53.643641   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.643649   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:53.643655   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:53.643711   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:53.673517   57716 cri.go:89] found id: ""
	I1210 05:55:53.673532   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.673539   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:53.673545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:53.673601   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:53.709885   57716 cri.go:89] found id: ""
	I1210 05:55:53.709899   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.709906   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:53.709911   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:53.709974   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:53.739765   57716 cri.go:89] found id: ""
	I1210 05:55:53.739778   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.739785   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:53.739792   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:53.739802   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:53.795061   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:53.795080   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:53.806101   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:53.806117   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:53.872226   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:53.863177   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.863668   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865274   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865802   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.867416   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:53.872238   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:53.872248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:53.933601   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:53.933619   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:56.466912   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:56.476796   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:56.476855   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:56.501021   57716 cri.go:89] found id: ""
	I1210 05:55:56.501035   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.501042   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:56.501048   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:56.501109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:56.524562   57716 cri.go:89] found id: ""
	I1210 05:55:56.524576   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.524583   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:56.524588   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:56.524644   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:56.547648   57716 cri.go:89] found id: ""
	I1210 05:55:56.547662   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.547669   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:56.547674   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:56.547730   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:56.576863   57716 cri.go:89] found id: ""
	I1210 05:55:56.576876   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.576883   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:56.576895   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:56.576956   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:56.600963   57716 cri.go:89] found id: ""
	I1210 05:55:56.600977   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.600984   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:56.600989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:56.601049   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:56.624726   57716 cri.go:89] found id: ""
	I1210 05:55:56.624739   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.624747   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:56.624755   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:56.624816   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:56.657236   57716 cri.go:89] found id: ""
	I1210 05:55:56.657249   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.657261   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:56.657270   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:56.657280   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:56.697559   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:56.697576   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:56.757986   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:56.758004   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:56.769563   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:56.769579   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:56.830223   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:56.822784   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.823676   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825199   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825498   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.826965   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:56.830233   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:56.830243   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:59.393208   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:59.403384   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:59.403452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:59.428722   57716 cri.go:89] found id: ""
	I1210 05:55:59.428749   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.428757   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:59.428763   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:59.428833   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:59.453874   57716 cri.go:89] found id: ""
	I1210 05:55:59.453887   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.453895   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:59.453901   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:59.453962   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:59.478240   57716 cri.go:89] found id: ""
	I1210 05:55:59.478253   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.478260   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:59.478271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:59.478329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:59.502468   57716 cri.go:89] found id: ""
	I1210 05:55:59.502482   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.502489   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:59.502494   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:59.502554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:59.526784   57716 cri.go:89] found id: ""
	I1210 05:55:59.526797   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.526804   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:59.526809   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:59.526872   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:59.552473   57716 cri.go:89] found id: ""
	I1210 05:55:59.552486   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.552493   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:59.552499   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:59.552552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:59.576249   57716 cri.go:89] found id: ""
	I1210 05:55:59.576262   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.576269   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:59.576276   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:59.576288   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:59.631147   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:59.631169   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:59.642052   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:59.642067   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:59.721714   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:59.711733   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.712627   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714378   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714692   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.716946   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:59.721733   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:59.721745   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:59.783216   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:59.783235   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:02.312967   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:02.323213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:02.323279   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:02.347978   57716 cri.go:89] found id: ""
	I1210 05:56:02.347992   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.348011   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:02.348017   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:02.348073   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:02.372899   57716 cri.go:89] found id: ""
	I1210 05:56:02.372912   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.372920   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:02.372926   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:02.372985   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:02.396971   57716 cri.go:89] found id: ""
	I1210 05:56:02.396985   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.396992   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:02.396997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:02.397057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:02.422416   57716 cri.go:89] found id: ""
	I1210 05:56:02.422430   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.422437   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:02.422443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:02.422501   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:02.447977   57716 cri.go:89] found id: ""
	I1210 05:56:02.447990   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.448004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:02.448009   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:02.448066   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:02.471774   57716 cri.go:89] found id: ""
	I1210 05:56:02.471788   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.471795   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:02.471800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:02.471857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:02.496057   57716 cri.go:89] found id: ""
	I1210 05:56:02.496072   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.496079   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:02.496088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:02.496098   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:02.523576   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:02.523592   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:02.579266   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:02.579296   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:02.590792   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:02.590809   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:02.657064   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:02.648570   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.649296   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651116   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651750   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.653344   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:02.657075   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:02.657085   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:05.229868   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:05.239953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:05.240012   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:05.264605   57716 cri.go:89] found id: ""
	I1210 05:56:05.264618   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.264626   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:05.264631   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:05.264689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:05.288264   57716 cri.go:89] found id: ""
	I1210 05:56:05.288277   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.288285   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:05.288290   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:05.288354   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:05.313427   57716 cri.go:89] found id: ""
	I1210 05:56:05.313441   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.313448   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:05.313454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:05.313510   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:05.344659   57716 cri.go:89] found id: ""
	I1210 05:56:05.344673   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.344680   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:05.344686   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:05.344743   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:05.369600   57716 cri.go:89] found id: ""
	I1210 05:56:05.369614   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.369621   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:05.369626   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:05.369683   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:05.397066   57716 cri.go:89] found id: ""
	I1210 05:56:05.397080   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.397088   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:05.397093   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:05.397150   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:05.422728   57716 cri.go:89] found id: ""
	I1210 05:56:05.422744   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.422751   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:05.422759   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:05.422770   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:05.485204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:05.477114   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.477952   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479558   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479866   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.481321   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:05.485215   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:05.485227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:05.547693   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:05.547712   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:05.580471   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:05.580488   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:05.639350   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:05.639369   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:08.151149   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:08.162270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:08.162351   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:08.189435   57716 cri.go:89] found id: ""
	I1210 05:56:08.189448   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.189455   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:08.189465   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:08.189530   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:08.218992   57716 cri.go:89] found id: ""
	I1210 05:56:08.219006   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.219031   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:08.219042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:08.219100   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:08.245141   57716 cri.go:89] found id: ""
	I1210 05:56:08.245153   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.245160   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:08.245165   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:08.245221   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:08.273294   57716 cri.go:89] found id: ""
	I1210 05:56:08.273307   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.273314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:08.273319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:08.273382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:08.298396   57716 cri.go:89] found id: ""
	I1210 05:56:08.298410   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.298417   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:08.298422   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:08.298482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:08.322670   57716 cri.go:89] found id: ""
	I1210 05:56:08.322684   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.322691   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:08.322696   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:08.322753   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:08.347986   57716 cri.go:89] found id: ""
	I1210 05:56:08.348000   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.348007   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:08.348015   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:08.348024   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:08.411052   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:08.411070   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:08.438849   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:08.438865   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:08.496560   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:08.496587   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:08.507905   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:08.507921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:08.573377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:08.565623   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.566145   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.567826   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.568336   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.569867   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:11.073585   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:11.083689   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:11.083757   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:11.108541   57716 cri.go:89] found id: ""
	I1210 05:56:11.108620   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.108628   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:11.108634   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:11.108694   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:11.134331   57716 cri.go:89] found id: ""
	I1210 05:56:11.134346   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.134353   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:11.134358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:11.134417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:11.158615   57716 cri.go:89] found id: ""
	I1210 05:56:11.158628   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.158635   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:11.158640   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:11.158698   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:11.183689   57716 cri.go:89] found id: ""
	I1210 05:56:11.183703   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.183710   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:11.183716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:11.183775   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:11.207798   57716 cri.go:89] found id: ""
	I1210 05:56:11.207812   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.207819   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:11.207825   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:11.207882   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:11.236712   57716 cri.go:89] found id: ""
	I1210 05:56:11.236726   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.236734   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:11.236739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:11.236801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:11.260759   57716 cri.go:89] found id: ""
	I1210 05:56:11.260773   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.260780   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:11.260788   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:11.260798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:11.289769   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:11.289786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:11.354319   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:11.354343   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:11.365879   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:11.365896   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:11.429322   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:11.420840   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.421615   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.423423   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.424052   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.425736   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:11.429334   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:11.429347   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:13.992257   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:14.005684   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:14.005747   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:14.031213   57716 cri.go:89] found id: ""
	I1210 05:56:14.031233   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.031241   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:14.031246   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:14.031308   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:14.055927   57716 cri.go:89] found id: ""
	I1210 05:56:14.055941   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.055948   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:14.055953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:14.056011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:14.080687   57716 cri.go:89] found id: ""
	I1210 05:56:14.080700   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.080707   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:14.080712   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:14.080770   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:14.108973   57716 cri.go:89] found id: ""
	I1210 05:56:14.108986   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.108993   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:14.108999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:14.109057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:14.138949   57716 cri.go:89] found id: ""
	I1210 05:56:14.138963   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.138971   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:14.138976   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:14.139058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:14.162184   57716 cri.go:89] found id: ""
	I1210 05:56:14.162199   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.162206   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:14.162211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:14.162267   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:14.186846   57716 cri.go:89] found id: ""
	I1210 05:56:14.186859   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.186866   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:14.186874   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:14.186885   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:14.214982   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:14.214998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:14.272262   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:14.272279   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:14.283290   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:14.283306   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:14.343519   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:14.335616   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.336321   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338030   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338568   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.340121   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:14.343530   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:14.343541   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:16.905886   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:16.915932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:16.915991   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:16.943689   57716 cri.go:89] found id: ""
	I1210 05:56:16.943703   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.943710   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:16.943715   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:16.943772   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:16.971692   57716 cri.go:89] found id: ""
	I1210 05:56:16.971705   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.971712   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:16.971717   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:16.971774   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:16.998705   57716 cri.go:89] found id: ""
	I1210 05:56:16.998721   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.998729   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:16.998734   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:16.998805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:17.028716   57716 cri.go:89] found id: ""
	I1210 05:56:17.028730   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.028737   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:17.028743   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:17.028810   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:17.056330   57716 cri.go:89] found id: ""
	I1210 05:56:17.056344   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.056351   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:17.056355   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:17.056412   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:17.084606   57716 cri.go:89] found id: ""
	I1210 05:56:17.084620   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.084627   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:17.084633   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:17.084690   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:17.108463   57716 cri.go:89] found id: ""
	I1210 05:56:17.108476   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.108484   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:17.108492   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:17.108502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:17.119206   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:17.119223   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:17.184513   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:17.176815   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.177383   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.178877   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.179482   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.181206   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:17.176815   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.177383   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.178877   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.179482   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.181206   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:17.184523   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:17.184533   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:17.249050   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:17.249068   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:17.277433   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:17.277448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
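
The cycle above repeats for the rest of this failure: minikube polls for a running kube-apiserver (pgrep first, then one crictl query per control-plane component), finds nothing, gathers diagnostics, and retries. The following is a minimal Go sketch of that wait loop, not minikube's implementation; the helper name anyContainer, the 6-minute deadline, and the 3-second retry interval are assumptions, and it expects sudo and crictl to be available on the node.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // anyContainer reports whether crictl lists any container (running or
    // exited) whose name matches the given component, mirroring the
    // "sudo crictl ps -a --quiet --name=..." calls in the log above.
    func anyContainer(name string) bool {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // assumed timeout, not minikube's
    	for time.Now().Before(deadline) {
    		if anyContainer("kube-apiserver") {
    			fmt.Println("kube-apiserver container found")
    			return
    		}
    		// In the real log each miss is followed by a diagnostics pass
    		// (kubelet and containerd journals, dmesg, describe nodes).
    		fmt.Println("no kube-apiserver container yet; retrying")
    		time.Sleep(3 * time.Second) // assumed interval
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
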
	I1210 05:56:19.835189   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:19.845211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:19.845270   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:19.869437   57716 cri.go:89] found id: ""
	I1210 05:56:19.869451   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.869457   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:19.869463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:19.869525   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:19.893666   57716 cri.go:89] found id: ""
	I1210 05:56:19.893680   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.893687   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:19.893691   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:19.893746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:19.925851   57716 cri.go:89] found id: ""
	I1210 05:56:19.925864   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.925871   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:19.925876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:19.925934   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:19.953268   57716 cri.go:89] found id: ""
	I1210 05:56:19.953283   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.953289   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:19.953295   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:19.953352   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:19.980541   57716 cri.go:89] found id: ""
	I1210 05:56:19.980555   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.980562   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:19.980567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:19.980629   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:20.014350   57716 cri.go:89] found id: ""
	I1210 05:56:20.014365   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.014383   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:20.014389   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:20.014463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:20.040904   57716 cri.go:89] found id: ""
	I1210 05:56:20.040918   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.040926   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:20.040933   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:20.040943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:20.097054   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:20.097072   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:20.108443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:20.108459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:20.173764   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:20.164932   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166475   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166965   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168506   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168930   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:20.164932   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166475   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166965   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168506   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168930   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:20.173773   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:20.173784   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:20.235116   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:20.235134   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:22.763516   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:22.773433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:22.773490   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:22.797542   57716 cri.go:89] found id: ""
	I1210 05:56:22.797556   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.797562   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:22.797568   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:22.797622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:22.821893   57716 cri.go:89] found id: ""
	I1210 05:56:22.821907   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.821915   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:22.821920   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:22.821976   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:22.850542   57716 cri.go:89] found id: ""
	I1210 05:56:22.850557   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.850564   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:22.850569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:22.850627   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:22.875288   57716 cri.go:89] found id: ""
	I1210 05:56:22.875301   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.875314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:22.875320   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:22.875376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:22.900725   57716 cri.go:89] found id: ""
	I1210 05:56:22.900739   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.900747   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:22.900752   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:22.900808   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:22.931217   57716 cri.go:89] found id: ""
	I1210 05:56:22.931230   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.931237   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:22.931243   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:22.931309   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:22.963506   57716 cri.go:89] found id: ""
	I1210 05:56:22.963519   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.963525   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:22.963533   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:22.963542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:23.025625   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:23.025643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:23.036825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:23.036841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:23.100693   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:23.092404   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.093143   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.094913   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.095571   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.097307   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:23.092404   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.093143   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.094913   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.095571   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.097307   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:23.100703   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:23.100715   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:23.160995   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:23.161014   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:25.690455   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:25.700306   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:25.700369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:25.725916   57716 cri.go:89] found id: ""
	I1210 05:56:25.725931   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.725942   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:25.725948   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:25.726009   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:25.749914   57716 cri.go:89] found id: ""
	I1210 05:56:25.749927   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.749935   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:25.749939   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:25.749998   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:25.776070   57716 cri.go:89] found id: ""
	I1210 05:56:25.776083   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.776090   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:25.776095   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:25.776154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:25.799518   57716 cri.go:89] found id: ""
	I1210 05:56:25.799532   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.799540   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:25.799546   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:25.799608   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:25.822990   57716 cri.go:89] found id: ""
	I1210 05:56:25.823057   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.823064   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:25.823072   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:25.823138   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:25.847416   57716 cri.go:89] found id: ""
	I1210 05:56:25.847430   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.847437   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:25.847442   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:25.847500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:25.871819   57716 cri.go:89] found id: ""
	I1210 05:56:25.871833   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.871840   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:25.871849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:25.871861   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:25.882590   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:25.882607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:25.975908   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:25.961777   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.962673   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967132   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967485   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.972482   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:25.961777   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.962673   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967132   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967485   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.972482   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:25.975918   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:25.975929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:26.042569   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:26.042588   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:26.070803   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:26.070819   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:28.629575   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:28.639457   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:28.639513   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:28.663811   57716 cri.go:89] found id: ""
	I1210 05:56:28.663824   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.663832   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:28.663837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:28.663892   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:28.688455   57716 cri.go:89] found id: ""
	I1210 05:56:28.688469   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.688476   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:28.688481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:28.688538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:28.711872   57716 cri.go:89] found id: ""
	I1210 05:56:28.711886   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.711893   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:28.711898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:28.711955   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:28.736153   57716 cri.go:89] found id: ""
	I1210 05:56:28.736166   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.736173   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:28.736181   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:28.736242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:28.759991   57716 cri.go:89] found id: ""
	I1210 05:56:28.760011   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.760018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:28.760023   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:28.760080   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:28.784928   57716 cri.go:89] found id: ""
	I1210 05:56:28.784942   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.784949   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:28.784955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:28.785011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:28.808330   57716 cri.go:89] found id: ""
	I1210 05:56:28.808343   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.808350   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:28.808359   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:28.808368   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:28.864140   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:28.864158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:28.874997   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:28.875030   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:28.946271   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:28.938223   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.939058   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.940712   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.941043   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.942516   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:28.938223   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.939058   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.940712   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.941043   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.942516   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:28.946281   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:28.946291   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:29.015729   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:29.015750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
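
Each diagnostics pass also re-runs kubectl describe nodes against the kubeconfig shown above, and every attempt fails identically: the client cannot reach https://localhost:8441 because no apiserver is listening. Below is a self-contained sketch of that single check, illustrative only and not part of the test suite; the binary and kubeconfig paths are copied verbatim from the log, and it is meant to run on the minikube node.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Binary and kubeconfig paths are taken from the log lines above.
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	var stderr bytes.Buffer
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		// With no apiserver on localhost:8441 this prints the same
    		// "connection refused" lines seen throughout the log.
    		fmt.Printf("describe nodes failed: %v\n%s", err, stderr.String())
    		return
    	}
    	fmt.Println("describe nodes succeeded")
    }
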
	I1210 05:56:31.546248   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:31.557000   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:31.557057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:31.581315   57716 cri.go:89] found id: ""
	I1210 05:56:31.581329   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.581336   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:31.581342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:31.581397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:31.606297   57716 cri.go:89] found id: ""
	I1210 05:56:31.606312   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.606327   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:31.606332   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:31.606389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:31.630600   57716 cri.go:89] found id: ""
	I1210 05:56:31.630614   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.630621   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:31.630627   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:31.630684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:31.658929   57716 cri.go:89] found id: ""
	I1210 05:56:31.658942   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.658949   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:31.658955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:31.659042   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:31.684421   57716 cri.go:89] found id: ""
	I1210 05:56:31.684434   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.684441   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:31.684456   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:31.684529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:31.708593   57716 cri.go:89] found id: ""
	I1210 05:56:31.708607   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.708614   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:31.708620   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:31.708678   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:31.733389   57716 cri.go:89] found id: ""
	I1210 05:56:31.733403   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.733411   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:31.733419   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:31.733429   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:31.762157   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:31.762171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:31.818205   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:31.818222   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:31.829166   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:31.829182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:31.894733   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:31.886837   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.887553   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889191   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889735   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.891344   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:31.886837   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.887553   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889191   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889735   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.891344   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:31.894745   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:31.894756   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.466636   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:34.477387   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:34.477462   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:34.508975   57716 cri.go:89] found id: ""
	I1210 05:56:34.508989   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.508996   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:34.509002   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:34.509058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:34.536397   57716 cri.go:89] found id: ""
	I1210 05:56:34.536410   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.536417   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:34.536424   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:34.536482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:34.560872   57716 cri.go:89] found id: ""
	I1210 05:56:34.560885   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.560892   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:34.560898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:34.560959   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:34.585436   57716 cri.go:89] found id: ""
	I1210 05:56:34.585450   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.585457   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:34.585463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:34.585520   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:34.609983   57716 cri.go:89] found id: ""
	I1210 05:56:34.609997   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.610004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:34.610010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:34.610065   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:34.634652   57716 cri.go:89] found id: ""
	I1210 05:56:34.634666   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.634674   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:34.634679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:34.634737   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:34.660417   57716 cri.go:89] found id: ""
	I1210 05:56:34.660431   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.660438   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:34.660446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:34.660468   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:34.715849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:34.715870   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:34.726672   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:34.726687   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:34.788897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:34.781210   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.781759   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783378   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783973   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.785508   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:34.781210   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.781759   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783378   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783973   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.785508   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:34.788907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:34.788917   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.850671   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:34.850690   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:37.378067   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:37.388018   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:37.388079   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:37.415590   57716 cri.go:89] found id: ""
	I1210 05:56:37.415604   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.415611   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:37.415617   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:37.415679   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:37.443166   57716 cri.go:89] found id: ""
	I1210 05:56:37.443179   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.443186   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:37.443192   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:37.443248   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:37.466187   57716 cri.go:89] found id: ""
	I1210 05:56:37.466201   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.466208   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:37.466214   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:37.466271   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:37.492297   57716 cri.go:89] found id: ""
	I1210 05:56:37.492321   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.492329   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:37.492335   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:37.492389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:37.515998   57716 cri.go:89] found id: ""
	I1210 05:56:37.516012   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.516018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:37.516024   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:37.516083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:37.540490   57716 cri.go:89] found id: ""
	I1210 05:56:37.540503   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.540510   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:37.540516   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:37.540576   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:37.565092   57716 cri.go:89] found id: ""
	I1210 05:56:37.565105   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.565111   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:37.565119   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:37.565137   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:37.625814   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:37.625837   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:37.637078   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:37.637104   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:37.697146   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:37.689936   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.690349   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691533   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691938   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.693652   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:56:37.689936   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.690349   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691533   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691938   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.693652   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:56:37.697156   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:37.697182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:37.757019   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:37.757038   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
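
The repeated "dial tcp [::1]:8441: connect: connection refused" lines all reduce to one fact: nothing is bound to the apiserver port. A quick probe, assuming only the port number taken from the log, confirms this independently of kubectl:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 8441 is the apiserver port from the refused connections in the log.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8441")
    }
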
	I1210 05:56:40.287595   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:40.298582   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:40.298641   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:40.322470   57716 cri.go:89] found id: ""
	I1210 05:56:40.322484   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.322491   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:40.322497   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:40.322552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:40.346764   57716 cri.go:89] found id: ""
	I1210 05:56:40.346778   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.346785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:40.346790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:40.346851   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:40.373286   57716 cri.go:89] found id: ""
	I1210 05:56:40.373300   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.373307   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:40.373313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:40.373372   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:40.402348   57716 cri.go:89] found id: ""
	I1210 05:56:40.402361   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.402368   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:40.402373   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:40.402428   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:40.427030   57716 cri.go:89] found id: ""
	I1210 05:56:40.427044   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.427052   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:40.427057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:40.427117   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:40.451451   57716 cri.go:89] found id: ""
	I1210 05:56:40.451478   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.451485   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:40.451491   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:40.451554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:40.480083   57716 cri.go:89] found id: ""
	I1210 05:56:40.480100   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.480106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:40.480114   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:40.480124   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:40.490894   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:40.490909   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:40.556681   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:40.549171   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.549844   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551479   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551814   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.553287   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:40.556692   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:40.556702   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:40.619424   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:40.619443   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:40.652592   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:40.652608   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
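The cycle above is a fixed probe sequence: first a pgrep for a running kube-apiserver process, then one crictl query per expected control-plane container. A minimal shell sketch of that same sequence, runnable inside the node (e.g. over minikube ssh); the component list and both commands are taken verbatim from the Run: lines above:

    # Reproduce the control-plane probe recorded in the log above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process up"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching $name"   # matches the W-level log lines
      else
        echo "$name: $ids"
      fi
    done

In this run every query returns an empty ID list, which is why each pass falls through to log gathering.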
	I1210 05:56:43.210686   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:43.221608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:43.221673   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:43.249950   57716 cri.go:89] found id: ""
	I1210 05:56:43.249964   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.249971   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:43.249977   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:43.250038   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:43.276671   57716 cri.go:89] found id: ""
	I1210 05:56:43.276685   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.276692   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:43.276697   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:43.276752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:43.301078   57716 cri.go:89] found id: ""
	I1210 05:56:43.301092   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.301099   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:43.301105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:43.301166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:43.325712   57716 cri.go:89] found id: ""
	I1210 05:56:43.325725   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.325732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:43.325753   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:43.325807   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:43.350013   57716 cri.go:89] found id: ""
	I1210 05:56:43.350027   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.350034   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:43.350039   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:43.350095   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:43.374239   57716 cri.go:89] found id: ""
	I1210 05:56:43.374253   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.374259   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:43.374265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:43.374325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:43.398684   57716 cri.go:89] found id: ""
	I1210 05:56:43.398697   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.398704   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:43.398713   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:43.398723   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:43.429674   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:43.429692   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:43.486606   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:43.486624   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:43.497851   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:43.497867   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:43.564988   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:43.556980   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.557595   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559286   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559906   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.561769   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:43.565001   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:43.565011   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
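Every kubectl attempt fails the same way: dial tcp [::1]:8441: connect: connection refused, consistent with no kube-apiserver container ever existing. A hedged sketch for confirming that diagnosis from inside the node (assumes /livez, the standard apiserver health path, would answer if the server were up):

    # Check whether anything is listening on the port kubectl dials.
    sudo ss -lntp | grep 8441 || echo "nothing listening on 8441"
    # A healthy apiserver would answer here; -k skips cert verification.
    curl -ks https://localhost:8441/livez || echo "connection refused, as in the log"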
	I1210 05:56:46.128659   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:46.139799   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:46.139857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:46.169381   57716 cri.go:89] found id: ""
	I1210 05:56:46.169395   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.169402   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:46.169408   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:46.169468   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:46.198882   57716 cri.go:89] found id: ""
	I1210 05:56:46.198896   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.198903   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:46.198909   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:46.198966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:46.234049   57716 cri.go:89] found id: ""
	I1210 05:56:46.234064   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.234072   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:46.234077   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:46.234134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:46.260031   57716 cri.go:89] found id: ""
	I1210 05:56:46.260044   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.260051   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:46.260057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:46.260112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:46.284339   57716 cri.go:89] found id: ""
	I1210 05:56:46.284353   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.284361   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:46.284366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:46.284425   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:46.309943   57716 cri.go:89] found id: ""
	I1210 05:56:46.309957   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.309964   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:46.309970   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:46.310026   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:46.335200   57716 cri.go:89] found id: ""
	I1210 05:56:46.335215   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.335222   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:46.335235   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:46.335247   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:46.391563   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:46.391580   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:46.403485   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:46.403501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:46.469778   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:46.461822   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.462325   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464066   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464772   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.466293   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:46.469787   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:46.469798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:46.533492   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:46.533510   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
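Each failed pass then gathers the same log sources. Collected into one sketch for re-running by hand, with every command copied from the Run: lines above; only the ordering varies between cycles:

    # The log sources minikube gathers on every failed probe cycle.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails while 8441 refuses
    sudo journalctl -u containerd -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a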
	I1210 05:56:49.061494   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:49.071430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:49.071494   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:49.094941   57716 cri.go:89] found id: ""
	I1210 05:56:49.094961   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.094969   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:49.094974   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:49.095053   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:49.119980   57716 cri.go:89] found id: ""
	I1210 05:56:49.119994   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.120001   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:49.120006   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:49.120061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:49.149253   57716 cri.go:89] found id: ""
	I1210 05:56:49.149267   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.149275   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:49.149280   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:49.149339   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:49.190394   57716 cri.go:89] found id: ""
	I1210 05:56:49.190407   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.190414   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:49.190419   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:49.190474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:49.226315   57716 cri.go:89] found id: ""
	I1210 05:56:49.226328   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.226335   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:49.226340   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:49.226398   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:49.253703   57716 cri.go:89] found id: ""
	I1210 05:56:49.253716   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.253723   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:49.253729   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:49.253793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:49.278595   57716 cri.go:89] found id: ""
	I1210 05:56:49.278609   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.278616   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:49.278633   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:49.278643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:49.339769   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:49.339786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:49.368179   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:49.368196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:49.424135   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:49.424152   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:49.435251   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:49.435277   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:49.499081   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:49.491345   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.492104   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.493573   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.494053   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.495641   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:52.000764   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:52.011936   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:52.011997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:52.044999   57716 cri.go:89] found id: ""
	I1210 05:56:52.045013   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.045020   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:52.045026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:52.045084   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:52.069248   57716 cri.go:89] found id: ""
	I1210 05:56:52.069262   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.069269   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:52.069274   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:52.069340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:52.098397   57716 cri.go:89] found id: ""
	I1210 05:56:52.098410   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.098428   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:52.098435   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:52.098500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:52.126868   57716 cri.go:89] found id: ""
	I1210 05:56:52.126887   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.126905   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:52.126910   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:52.126965   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:52.150645   57716 cri.go:89] found id: ""
	I1210 05:56:52.150658   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.150666   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:52.150681   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:52.150740   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:52.186283   57716 cri.go:89] found id: ""
	I1210 05:56:52.186296   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.186304   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:52.186318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:52.186374   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:52.218438   57716 cri.go:89] found id: ""
	I1210 05:56:52.218451   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.218458   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:52.218476   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:52.218486   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:52.281011   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:52.273152   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.273845   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.275592   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.276072   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.277623   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:52.281021   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:52.281032   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:52.342042   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:52.342058   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:52.373121   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:52.373136   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:52.428970   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:52.428987   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
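Judging by the timestamps (05:56:40, :43, :46, :49, :52, ...), the probe repeats on roughly a three-second cadence until some overall deadline. An illustrative bash equivalent of that wait loop; the real loop lives in minikube's Go code, the 3s interval is inferred from the timestamps, and the 4-minute deadline is an assumption, not minikube's actual timeout:

    # Illustrative wait loop; interval and deadline are assumptions (see above).
    deadline=$(( $(date +%s) + 240 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver process detected"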
	I1210 05:56:54.940399   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:54.950167   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:54.950228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:54.974172   57716 cri.go:89] found id: ""
	I1210 05:56:54.974186   57716 logs.go:282] 0 containers: []
	W1210 05:56:54.974193   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:54.974199   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:54.974257   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:55.008246   57716 cri.go:89] found id: ""
	I1210 05:56:55.008262   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.008270   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:55.008275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:55.008340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:55.034655   57716 cri.go:89] found id: ""
	I1210 05:56:55.034669   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.034676   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:55.034682   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:55.034741   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:55.063972   57716 cri.go:89] found id: ""
	I1210 05:56:55.063986   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.063994   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:55.063999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:55.064057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:55.090263   57716 cri.go:89] found id: ""
	I1210 05:56:55.090275   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.090292   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:55.090298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:55.090353   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:55.113407   57716 cri.go:89] found id: ""
	I1210 05:56:55.113421   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.113428   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:55.113433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:55.113491   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:55.140991   57716 cri.go:89] found id: ""
	I1210 05:56:55.141010   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.141018   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:55.141025   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:55.141036   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:55.201731   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:55.201749   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:55.218256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:55.218270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:55.290800   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:55.282984   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.283573   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285214   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285730   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.287308   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:55.290811   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:55.290831   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:55.355200   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:55.355218   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:57.881741   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:57.891584   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:57.891646   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:57.918310   57716 cri.go:89] found id: ""
	I1210 05:56:57.918323   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.918330   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:57.918336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:57.918391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:57.942318   57716 cri.go:89] found id: ""
	I1210 05:56:57.942331   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.942338   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:57.942344   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:57.942402   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:57.966253   57716 cri.go:89] found id: ""
	I1210 05:56:57.966267   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.966274   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:57.966279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:57.966338   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:57.990324   57716 cri.go:89] found id: ""
	I1210 05:56:57.990338   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.990346   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:57.990351   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:57.990414   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:58.021444   57716 cri.go:89] found id: ""
	I1210 05:56:58.021458   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.021466   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:58.021471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:58.021529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:58.046661   57716 cri.go:89] found id: ""
	I1210 05:56:58.046680   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.046688   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:58.046699   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:58.046767   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:58.071123   57716 cri.go:89] found id: ""
	I1210 05:56:58.071137   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.071145   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:58.071153   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:58.071162   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:58.135978   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:58.135998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:58.167638   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:58.167656   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:58.232589   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:58.232610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:58.244347   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:58.244363   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:58.304989   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:58.297197   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.297898   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.299609   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.300132   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.301733   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:00.806679   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:00.816733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:00.816793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:00.845594   57716 cri.go:89] found id: ""
	I1210 05:57:00.845608   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.845615   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:00.845622   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:00.845682   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:00.880377   57716 cri.go:89] found id: ""
	I1210 05:57:00.880391   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.880399   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:00.880405   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:00.880463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:00.904970   57716 cri.go:89] found id: ""
	I1210 05:57:00.904990   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.904997   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:00.905003   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:00.905063   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:00.933169   57716 cri.go:89] found id: ""
	I1210 05:57:00.933183   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.933191   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:00.933196   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:00.933255   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:00.962218   57716 cri.go:89] found id: ""
	I1210 05:57:00.962231   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.962238   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:00.962244   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:00.962301   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:00.987794   57716 cri.go:89] found id: ""
	I1210 05:57:00.987807   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.987814   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:00.987820   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:00.987879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:01.014287   57716 cri.go:89] found id: ""
	I1210 05:57:01.014302   57716 logs.go:282] 0 containers: []
	W1210 05:57:01.014309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:01.014318   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:01.014328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:01.045925   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:01.045941   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:01.102696   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:01.102714   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:01.114077   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:01.114092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:01.201703   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:01.177406   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.182687   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.186518   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.195186   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.196003   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:01.201726   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:01.201738   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:03.774227   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:03.784265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:03.784325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:03.809259   57716 cri.go:89] found id: ""
	I1210 05:57:03.809273   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.809280   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:03.809285   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:03.809347   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:03.835314   57716 cri.go:89] found id: ""
	I1210 05:57:03.835329   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.835336   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:03.835342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:03.835401   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:03.860149   57716 cri.go:89] found id: ""
	I1210 05:57:03.860163   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.860170   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:03.860175   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:03.860243   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:03.886583   57716 cri.go:89] found id: ""
	I1210 05:57:03.886597   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.886604   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:03.886610   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:03.886669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:03.915441   57716 cri.go:89] found id: ""
	I1210 05:57:03.915454   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.915462   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:03.915467   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:03.915528   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:03.939994   57716 cri.go:89] found id: ""
	I1210 05:57:03.940008   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.940015   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:03.940021   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:03.944397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:03.970729   57716 cri.go:89] found id: ""
	I1210 05:57:03.970742   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.970749   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:03.970757   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:03.970768   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:04.027596   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:04.027617   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:04.039557   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:04.039578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:04.105314   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:04.097441   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.098313   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.099991   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.100340   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.101876   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:04.105325   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:04.105336   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:04.167908   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:04.167927   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:06.703048   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:06.712953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:06.713014   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:06.740745   57716 cri.go:89] found id: ""
	I1210 05:57:06.740759   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.740766   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:06.740771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:06.740826   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:06.764572   57716 cri.go:89] found id: ""
	I1210 05:57:06.764585   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.764592   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:06.764598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:06.764654   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:06.792403   57716 cri.go:89] found id: ""
	I1210 05:57:06.792418   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.792425   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:06.792430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:06.792488   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:06.816569   57716 cri.go:89] found id: ""
	I1210 05:57:06.816583   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.816591   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:06.816596   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:06.816659   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:06.841104   57716 cri.go:89] found id: ""
	I1210 05:57:06.841118   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.841125   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:06.841131   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:06.841191   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:06.863923   57716 cri.go:89] found id: ""
	I1210 05:57:06.863936   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.863943   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:06.863949   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:06.864004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:06.889078   57716 cri.go:89] found id: ""
	I1210 05:57:06.889091   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.889099   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:06.889106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:06.889116   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:06.943842   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:06.943863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:06.954461   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:06.954477   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:07.025823   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:07.017208   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.017859   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.019473   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.020044   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.021782   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:07.025833   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:07.025847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:07.087136   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:07.087156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.618129   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:09.627876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:09.627939   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:09.655385   57716 cri.go:89] found id: ""
	I1210 05:57:09.655399   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.655406   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:09.655411   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:09.655476   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:09.678439   57716 cri.go:89] found id: ""
	I1210 05:57:09.678453   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.678460   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:09.678466   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:09.678521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:09.708049   57716 cri.go:89] found id: ""
	I1210 05:57:09.708063   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.708071   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:09.708076   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:09.708134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:09.731272   57716 cri.go:89] found id: ""
	I1210 05:57:09.731286   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.731293   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:09.731298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:09.731355   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:09.756542   57716 cri.go:89] found id: ""
	I1210 05:57:09.756556   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.756563   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:09.756569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:09.756625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:09.782376   57716 cri.go:89] found id: ""
	I1210 05:57:09.782389   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.782396   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:09.782402   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:09.782469   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:09.806766   57716 cri.go:89] found id: ""
	I1210 05:57:09.806780   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.806787   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:09.806795   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:09.806806   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:09.817591   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:09.817607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:09.877883   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:09.869907   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.870472   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872036   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872545   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.874081   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:09.877897   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:09.877907   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:09.939799   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:09.939817   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.972539   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:09.972555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.528080   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:12.538052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:12.538112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:12.561407   57716 cri.go:89] found id: ""
	I1210 05:57:12.561421   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.561429   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:12.561434   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:12.561504   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:12.587323   57716 cri.go:89] found id: ""
	I1210 05:57:12.587337   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.587344   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:12.587349   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:12.587407   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:12.611528   57716 cri.go:89] found id: ""
	I1210 05:57:12.611542   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.611550   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:12.611555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:12.611613   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:12.639252   57716 cri.go:89] found id: ""
	I1210 05:57:12.639266   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.639273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:12.639278   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:12.639340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:12.662845   57716 cri.go:89] found id: ""
	I1210 05:57:12.662858   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.662865   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:12.662871   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:12.662924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:12.687312   57716 cri.go:89] found id: ""
	I1210 05:57:12.687325   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.687332   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:12.687338   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:12.687410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:12.712443   57716 cri.go:89] found id: ""
	I1210 05:57:12.712456   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.712463   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:12.712471   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:12.712484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:12.772312   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:12.772330   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:12.800589   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:12.800611   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.856815   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:12.856832   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:12.868411   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:12.868427   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:12.938613   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:12.928160   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.928723   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.932890   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.933536   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.935292   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:15.439137   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:15.449933   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:15.450005   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:15.483755   57716 cri.go:89] found id: ""
	I1210 05:57:15.483769   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.483775   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:15.483781   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:15.483837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:15.507520   57716 cri.go:89] found id: ""
	I1210 05:57:15.507534   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.507542   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:15.507547   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:15.507605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:15.534553   57716 cri.go:89] found id: ""
	I1210 05:57:15.534566   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.534573   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:15.534578   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:15.534635   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:15.559360   57716 cri.go:89] found id: ""
	I1210 05:57:15.559374   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.559381   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:15.559386   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:15.559443   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:15.584591   57716 cri.go:89] found id: ""
	I1210 05:57:15.584607   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.584614   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:15.584619   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:15.584677   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:15.613451   57716 cri.go:89] found id: ""
	I1210 05:57:15.613471   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.613479   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:15.613485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:15.613607   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:15.638843   57716 cri.go:89] found id: ""
	I1210 05:57:15.638858   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.638865   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:15.638874   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:15.638884   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:15.694185   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:15.694203   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:15.704709   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:15.704725   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:15.769534   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:15.761459   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.762286   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.763956   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.764609   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.766176   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:15.769543   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:15.769556   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:15.830240   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:15.830258   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:18.356935   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:18.366837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:18.366896   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:18.391280   57716 cri.go:89] found id: ""
	I1210 05:57:18.391294   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.391301   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:18.391308   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:18.391376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:18.421532   57716 cri.go:89] found id: ""
	I1210 05:57:18.421546   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.421553   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:18.421558   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:18.421625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:18.455057   57716 cri.go:89] found id: ""
	I1210 05:57:18.455071   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.455078   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:18.455083   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:18.455153   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:18.488121   57716 cri.go:89] found id: ""
	I1210 05:57:18.488135   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.488142   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:18.488148   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:18.488210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:18.511864   57716 cri.go:89] found id: ""
	I1210 05:57:18.511878   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.511886   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:18.511905   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:18.511966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:18.535922   57716 cri.go:89] found id: ""
	I1210 05:57:18.535936   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.535957   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:18.535963   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:18.536029   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:18.560287   57716 cri.go:89] found id: ""
	I1210 05:57:18.560302   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.560309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:18.560317   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:18.560328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:18.627753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:18.619860   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.620508   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622357   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622888   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.624346   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:18.627764   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:18.627776   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:18.688471   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:18.688489   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:18.719143   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:18.719159   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:18.774435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:18.774453   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:21.285722   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:21.295523   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:21.295582   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:21.322675   57716 cri.go:89] found id: ""
	I1210 05:57:21.322688   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.322696   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:21.322701   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:21.322758   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:21.347136   57716 cri.go:89] found id: ""
	I1210 05:57:21.347150   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.347157   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:21.347162   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:21.347219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:21.372204   57716 cri.go:89] found id: ""
	I1210 05:57:21.372217   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.372224   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:21.372229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:21.372283   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:21.395417   57716 cri.go:89] found id: ""
	I1210 05:57:21.395431   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.395438   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:21.395443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:21.395515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:21.440154   57716 cri.go:89] found id: ""
	I1210 05:57:21.440167   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.440174   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:21.440179   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:21.440240   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:21.473140   57716 cri.go:89] found id: ""
	I1210 05:57:21.473154   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.473166   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:21.473172   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:21.473227   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:21.501607   57716 cri.go:89] found id: ""
	I1210 05:57:21.501630   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.501638   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:21.501646   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:21.501657   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:21.534381   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:21.534397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:21.591435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:21.591454   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:21.602570   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:21.602586   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:21.665543   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:21.656612   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.657173   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659237   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659598   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.661250   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:21.665553   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:21.665564   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.232360   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:24.242545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:24.242605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:24.268962   57716 cri.go:89] found id: ""
	I1210 05:57:24.268976   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.268983   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:24.268989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:24.269051   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:24.293625   57716 cri.go:89] found id: ""
	I1210 05:57:24.293638   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.293645   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:24.293650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:24.293706   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:24.323101   57716 cri.go:89] found id: ""
	I1210 05:57:24.323115   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.323122   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:24.323127   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:24.323184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:24.352417   57716 cri.go:89] found id: ""
	I1210 05:57:24.352431   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.352442   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:24.352448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:24.352506   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:24.377825   57716 cri.go:89] found id: ""
	I1210 05:57:24.377839   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.377846   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:24.377851   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:24.377907   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:24.401476   57716 cri.go:89] found id: ""
	I1210 05:57:24.401490   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.401497   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:24.401502   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:24.401560   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:24.430784   57716 cri.go:89] found id: ""
	I1210 05:57:24.430798   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.430805   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:24.430813   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:24.430826   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:24.496086   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:24.496105   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:24.508163   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:24.508178   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:24.572343   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:24.563972   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.564594   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.566433   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.567204   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.568973   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:24.572354   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:24.572365   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.634266   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:24.634284   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:27.162032   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:27.171692   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:27.171751   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:27.195293   57716 cri.go:89] found id: ""
	I1210 05:57:27.195306   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.195313   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:27.195319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:27.195375   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:27.223719   57716 cri.go:89] found id: ""
	I1210 05:57:27.223733   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.223741   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:27.223746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:27.223805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:27.249635   57716 cri.go:89] found id: ""
	I1210 05:57:27.249648   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.249655   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:27.249661   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:27.249718   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:27.274420   57716 cri.go:89] found id: ""
	I1210 05:57:27.274434   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.274443   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:27.274448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:27.274515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:27.302747   57716 cri.go:89] found id: ""
	I1210 05:57:27.302760   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.302777   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:27.302782   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:27.302842   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:27.327624   57716 cri.go:89] found id: ""
	I1210 05:57:27.327638   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.327645   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:27.327650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:27.327710   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:27.351138   57716 cri.go:89] found id: ""
	I1210 05:57:27.351152   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.351159   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:27.351168   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:27.351179   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:27.416428   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:27.416448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:27.458729   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:27.458746   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:27.517941   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:27.517959   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:27.528443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:27.528459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:27.592381   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:27.584705   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.585249   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.586673   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.587168   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.588572   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
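	For reference, the probe sequence repeated in each cycle above can be re-run by hand. A minimal sketch, assuming shell access to the node (e.g. via "minikube ssh"); every command is taken verbatim from the log lines above:
	
	    # Sketch: reproduce the control-plane probe from the cycles above by hand.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # any running apiserver process?
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      echo "--- ${c}"
	      sudo crictl ps -a --quiet --name="${c}"        # empty output = the 'found id: ""' lines above
	    done
	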
	I1210 05:57:30.094042   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:30.104609   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:30.104685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:30.131255   57716 cri.go:89] found id: ""
	I1210 05:57:30.131270   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.131277   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:30.131283   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:30.131348   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:30.160477   57716 cri.go:89] found id: ""
	I1210 05:57:30.160491   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.160498   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:30.160503   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:30.160562   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:30.186824   57716 cri.go:89] found id: ""
	I1210 05:57:30.186837   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.186845   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:30.186850   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:30.186910   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:30.212870   57716 cri.go:89] found id: ""
	I1210 05:57:30.212885   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.212892   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:30.212899   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:30.212957   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:30.238085   57716 cri.go:89] found id: ""
	I1210 05:57:30.238098   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.238105   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:30.238111   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:30.238169   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:30.264614   57716 cri.go:89] found id: ""
	I1210 05:57:30.264628   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.264635   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:30.264641   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:30.264697   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:30.292801   57716 cri.go:89] found id: ""
	I1210 05:57:30.292816   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.292823   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:30.292831   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:30.292841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:30.324527   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:30.324543   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:30.382130   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:30.382156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:30.392903   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:30.392921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:30.479224   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:30.470442   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.471725   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.473752   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.474178   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.475815   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:30.479235   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:30.479257   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.043979   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:33.054086   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:33.054144   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:33.079719   57716 cri.go:89] found id: ""
	I1210 05:57:33.079733   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.079740   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:33.079746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:33.079804   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:33.109000   57716 cri.go:89] found id: ""
	I1210 05:57:33.109013   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.109020   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:33.109026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:33.109083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:33.134184   57716 cri.go:89] found id: ""
	I1210 05:57:33.134198   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.134206   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:33.134213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:33.134275   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:33.158142   57716 cri.go:89] found id: ""
	I1210 05:57:33.158155   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.158162   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:33.158168   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:33.158253   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:33.181293   57716 cri.go:89] found id: ""
	I1210 05:57:33.181306   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.181313   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:33.181319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:33.181376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:33.206025   57716 cri.go:89] found id: ""
	I1210 05:57:33.206040   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.206047   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:33.206052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:33.206149   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:33.230253   57716 cri.go:89] found id: ""
	I1210 05:57:33.230267   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.230275   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:33.230283   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:33.230293   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.292011   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:33.292028   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:33.318004   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:33.318019   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:33.377256   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:33.377273   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:33.387928   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:33.387943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:33.461753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:33.453954   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.454800   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456253   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456768   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.458350   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
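	The recurring "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port kubectl keeps dialing. A quick check, assuming "ss" and "curl" are available inside the node (probing /healthz is an assumption here, used only as a reachability test):
	
	    # Is anything serving port 8441, the apiserver port from the errors above?
	    sudo ss -tlnp | grep ':8441' || echo 'no listener on 8441'
	    # TLS probe; -k skips certificate verification since this only tests reachability
	    curl -sk --max-time 5 https://localhost:8441/healthz || echo 'apiserver unreachable'
	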
	I1210 05:57:35.962013   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:35.972548   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:35.972622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:36.000855   57716 cri.go:89] found id: ""
	I1210 05:57:36.000870   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.000880   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:36.000900   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:36.000977   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:36.029136   57716 cri.go:89] found id: ""
	I1210 05:57:36.029151   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.029158   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:36.029164   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:36.029228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:36.054512   57716 cri.go:89] found id: ""
	I1210 05:57:36.054525   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.054533   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:36.054538   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:36.054597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:36.080508   57716 cri.go:89] found id: ""
	I1210 05:57:36.080522   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.080529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:36.080535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:36.080594   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:36.108590   57716 cri.go:89] found id: ""
	I1210 05:57:36.108604   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.108611   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:36.108616   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:36.108684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:36.137690   57716 cri.go:89] found id: ""
	I1210 05:57:36.137704   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.137711   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:36.137716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:36.137777   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:36.164307   57716 cri.go:89] found id: ""
	I1210 05:57:36.164321   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.164328   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:36.164335   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:36.164345   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:36.219816   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:36.219833   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:36.231171   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:36.231187   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:36.294059   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:36.285785   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.286547   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288109   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288462   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.290084   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:36.294068   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:36.294078   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:36.358593   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:36.358612   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:38.888296   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:38.898447   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:38.898505   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:38.925123   57716 cri.go:89] found id: ""
	I1210 05:57:38.925137   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.925144   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:38.925150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:38.925210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:38.949713   57716 cri.go:89] found id: ""
	I1210 05:57:38.949727   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.949734   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:38.949739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:38.949797   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:38.974867   57716 cri.go:89] found id: ""
	I1210 05:57:38.974881   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.974888   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:38.974893   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:38.974949   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:39.008214   57716 cri.go:89] found id: ""
	I1210 05:57:39.008228   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.008235   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:39.008240   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:39.008300   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:39.033316   57716 cri.go:89] found id: ""
	I1210 05:57:39.033330   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.033342   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:39.033347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:39.033405   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:39.057634   57716 cri.go:89] found id: ""
	I1210 05:57:39.057648   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.057655   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:39.057660   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:39.057719   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:39.082101   57716 cri.go:89] found id: ""
	I1210 05:57:39.082115   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.082125   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:39.082133   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:39.082143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:39.144897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:39.137033   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.137582   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139164   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139565   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.141172   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:39.144907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:39.144920   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:39.209520   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:39.209538   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:39.239106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:39.239121   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:39.294711   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:39.294728   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:41.805411   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:41.814952   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:41.815027   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:41.838919   57716 cri.go:89] found id: ""
	I1210 05:57:41.838933   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.838940   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:41.838946   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:41.839004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:41.865368   57716 cri.go:89] found id: ""
	I1210 05:57:41.865382   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.865389   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:41.865394   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:41.865452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:41.889411   57716 cri.go:89] found id: ""
	I1210 05:57:41.889424   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.889431   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:41.889436   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:41.889521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:41.915079   57716 cri.go:89] found id: ""
	I1210 05:57:41.915093   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.915101   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:41.915110   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:41.915173   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:41.940274   57716 cri.go:89] found id: ""
	I1210 05:57:41.940288   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.940295   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:41.940301   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:41.940360   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:41.969301   57716 cri.go:89] found id: ""
	I1210 05:57:41.969314   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.969321   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:41.969329   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:41.969387   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:41.993086   57716 cri.go:89] found id: ""
	I1210 05:57:41.993100   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.993108   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:41.993116   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:41.993127   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:42.006335   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:42.006357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:42.077276   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:42.067659   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.069125   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.070001   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071203   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071880   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:42.077290   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:42.077302   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:42.143212   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:42.143248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:42.179140   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:42.179158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:44.752413   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:44.762150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:44.762207   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:44.791897   57716 cri.go:89] found id: ""
	I1210 05:57:44.791911   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.791918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:44.791924   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:44.791983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:44.815813   57716 cri.go:89] found id: ""
	I1210 05:57:44.815827   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.815834   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:44.815839   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:44.815894   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:44.839318   57716 cri.go:89] found id: ""
	I1210 05:57:44.839331   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.839337   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:44.839342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:44.839399   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:44.866822   57716 cri.go:89] found id: ""
	I1210 05:57:44.866835   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.866842   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:44.866848   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:44.866904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:44.892455   57716 cri.go:89] found id: ""
	I1210 05:57:44.892469   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.892476   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:44.892481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:44.892536   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:44.920574   57716 cri.go:89] found id: ""
	I1210 05:57:44.920588   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.920596   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:44.920602   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:44.920663   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:44.947951   57716 cri.go:89] found id: ""
	I1210 05:57:44.947965   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.947971   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:44.947979   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:44.947988   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:45.005480   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:45.005501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:45.022560   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:45.022578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:45.142523   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:45.129527   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.130054   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.132621   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.134289   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.135580   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:45.142534   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:45.142550   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:45.216088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:45.216135   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:47.759715   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:47.769555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:47.769615   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:47.793943   57716 cri.go:89] found id: ""
	I1210 05:57:47.793957   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.793964   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:47.793969   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:47.794039   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:47.818334   57716 cri.go:89] found id: ""
	I1210 05:57:47.818348   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.818355   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:47.818360   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:47.818417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:47.842582   57716 cri.go:89] found id: ""
	I1210 05:57:47.842599   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.842617   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:47.842623   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:47.842689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:47.868471   57716 cri.go:89] found id: ""
	I1210 05:57:47.868485   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.868492   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:47.868498   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:47.868559   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:47.897381   57716 cri.go:89] found id: ""
	I1210 05:57:47.897394   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.897401   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:47.897416   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:47.897473   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:47.920386   57716 cri.go:89] found id: ""
	I1210 05:57:47.920400   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.920407   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:47.920412   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:47.920474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:47.947866   57716 cri.go:89] found id: ""
	I1210 05:57:47.947879   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.947886   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:47.947894   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:47.947904   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:48.008844   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:48.008863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:48.038885   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:48.038903   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:48.095592   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:48.095610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:48.107140   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:48.107155   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:48.171340   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:48.162734   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.163476   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165210   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165663   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.167242   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
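	Each cycle also gathers the same four log sources. To collect them once by hand, a sketch using only commands that appear verbatim in the log:
	
	    sudo journalctl -u kubelet -n 400        # kubelet: why the static pods never start
	    sudo journalctl -u containerd -n 400     # container runtime
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a || sudo docker ps -a   # container status, with the same docker fallback
	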
	I1210 05:57:50.672091   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:50.683391   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:50.683451   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:50.711296   57716 cri.go:89] found id: ""
	I1210 05:57:50.711311   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.711319   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:50.711327   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:50.711382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:50.740763   57716 cri.go:89] found id: ""
	I1210 05:57:50.740777   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.740785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:50.740790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:50.740853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:50.772079   57716 cri.go:89] found id: ""
	I1210 05:57:50.772093   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.772111   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:50.772117   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:50.772184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:50.800962   57716 cri.go:89] found id: ""
	I1210 05:57:50.800975   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.800982   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:50.800988   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:50.801044   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:50.825974   57716 cri.go:89] found id: ""
	I1210 05:57:50.825993   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.826000   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:50.826005   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:50.826061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:50.854343   57716 cri.go:89] found id: ""
	I1210 05:57:50.854356   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.854364   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:50.854369   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:50.854426   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:50.878560   57716 cri.go:89] found id: ""
	I1210 05:57:50.878573   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.878581   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:50.878599   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:50.878609   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:50.906006   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:50.906022   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:50.961851   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:50.961869   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:50.973152   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:50.973171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:51.044678   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:51.036912   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.037431   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039082   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039573   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.041151   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:51.036912   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.037431   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039082   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039573   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.041151   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:51.044689   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:51.044699   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
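
Each pass of the block above walks the same control-plane component list with `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "0 containers", which produces the repeated `found id: ""` and `No container was found matching` lines. A minimal, hypothetical Go sketch of that check, run locally rather than through minikube's ssh_runner (the helper name listContainers is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
// probe from the log; --quiet prints one container ID per line, and no
// matches means empty output with a zero exit status.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
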
	I1210 05:57:53.606481   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:53.616567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:53.616625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:53.641012   57716 cri.go:89] found id: ""
	I1210 05:57:53.641025   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.641031   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:53.641037   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:53.641092   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:53.673275   57716 cri.go:89] found id: ""
	I1210 05:57:53.673290   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.673307   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:53.673313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:53.673369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:53.709276   57716 cri.go:89] found id: ""
	I1210 05:57:53.709291   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.709298   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:53.709302   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:53.709369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:53.739332   57716 cri.go:89] found id: ""
	I1210 05:57:53.739346   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.739353   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:53.739358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:53.739415   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:53.764637   57716 cri.go:89] found id: ""
	I1210 05:57:53.764650   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.764657   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:53.764662   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:53.764717   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:53.793424   57716 cri.go:89] found id: ""
	I1210 05:57:53.793438   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.793446   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:53.793451   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:53.793514   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:53.823828   57716 cri.go:89] found id: ""
	I1210 05:57:53.823842   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.823849   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:53.823857   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:53.823868   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:53.834565   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:53.834583   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:53.898035   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:53.890056   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.890844   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.892495   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.893000   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.894620   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:53.890056   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.890844   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.892495   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.893000   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.894620   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:53.898052   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:53.898063   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:53.960027   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:53.960044   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:53.988584   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:53.988600   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.551892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:56.562044   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:56.562109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:56.587872   57716 cri.go:89] found id: ""
	I1210 05:57:56.587889   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.587897   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:56.587902   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:56.587967   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:56.613907   57716 cri.go:89] found id: ""
	I1210 05:57:56.613920   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.613927   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:56.613932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:56.613988   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:56.638685   57716 cri.go:89] found id: ""
	I1210 05:57:56.638699   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.638706   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:56.638711   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:56.638768   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:56.665211   57716 cri.go:89] found id: ""
	I1210 05:57:56.665225   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.665232   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:56.665237   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:56.665295   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:56.696149   57716 cri.go:89] found id: ""
	I1210 05:57:56.696163   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.696169   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:56.696174   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:56.696231   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:56.728016   57716 cri.go:89] found id: ""
	I1210 05:57:56.728029   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.728036   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:56.728042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:56.728104   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:56.752871   57716 cri.go:89] found id: ""
	I1210 05:57:56.752886   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.752894   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:56.752901   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:56.752913   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:56.783267   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:56.783283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.842023   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:56.842046   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:56.853533   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:56.853549   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:56.914976   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:56.907206   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.907987   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909541   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909854   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.911455   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:56.907206   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.907987   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909541   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909854   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.911455   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:56.914988   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:56.915000   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
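
After every failed listing pass, the same gathering commands run in some order: `journalctl -u kubelet -n 400`, a filtered `dmesg`, `kubectl describe nodes` (which fails here), `journalctl -u containerd -n 400`, and a crictl/docker container-status fallback. A hedged sketch that runs those diagnostics in sequence and keeps going on failure, the way the loop above continues after `describe nodes` exits with status 1 (command strings copied from the log; the surrounding program is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ordered name/command pairs taken from the "Gathering logs for ..." steps.
	cmds := [][2]string{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		name, cmd := c[0], c[1]
		// Run through bash -c, as ssh_runner does, so pipes and backticks work.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
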
	I1210 05:57:59.477082   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:59.487185   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:59.487242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:59.511535   57716 cri.go:89] found id: ""
	I1210 05:57:59.511549   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.511556   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:59.511562   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:59.511639   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:59.536235   57716 cri.go:89] found id: ""
	I1210 05:57:59.536249   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.536265   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:59.536271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:59.536329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:59.560801   57716 cri.go:89] found id: ""
	I1210 05:57:59.560815   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.560821   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:59.560827   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:59.560890   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:59.586232   57716 cri.go:89] found id: ""
	I1210 05:57:59.586247   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.586273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:59.586279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:59.586343   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:59.610087   57716 cri.go:89] found id: ""
	I1210 05:57:59.610101   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.610108   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:59.610113   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:59.610170   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:59.634249   57716 cri.go:89] found id: ""
	I1210 05:57:59.634263   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.634270   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:59.634275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:59.634333   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:59.659066   57716 cri.go:89] found id: ""
	I1210 05:57:59.659100   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.659106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:59.659115   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:59.659125   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:59.670606   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:59.670622   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:59.744825   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:59.737528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.737938   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739905   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.741403   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:59.737528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.737938   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739905   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.741403   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:59.744835   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:59.744847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:59.806075   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:59.806092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:59.841753   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:59.841769   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.400095   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:02.410925   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:02.410999   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:02.435337   57716 cri.go:89] found id: ""
	I1210 05:58:02.435351   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.435358   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:02.435363   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:02.435421   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:02.459273   57716 cri.go:89] found id: ""
	I1210 05:58:02.459287   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.459294   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:02.459299   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:02.459369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:02.484838   57716 cri.go:89] found id: ""
	I1210 05:58:02.484859   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.484867   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:02.484872   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:02.484930   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:02.513703   57716 cri.go:89] found id: ""
	I1210 05:58:02.513718   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.513732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:02.513738   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:02.513799   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:02.537442   57716 cri.go:89] found id: ""
	I1210 05:58:02.537456   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.537472   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:02.537478   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:02.537538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:02.562811   57716 cri.go:89] found id: ""
	I1210 05:58:02.562824   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.562831   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:02.562837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:02.562904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:02.593233   57716 cri.go:89] found id: ""
	I1210 05:58:02.593247   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.593254   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:02.593263   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:02.593283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.649484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:02.649502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:02.668256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:02.668270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:02.746961   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:02.738936   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.739525   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741090   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741618   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.743247   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:02.738936   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.739525   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741090   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741618   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.743247   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:02.746984   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:02.746995   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:02.810434   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:02.810451   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:05.338812   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:05.348929   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:05.349015   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:05.376460   57716 cri.go:89] found id: ""
	I1210 05:58:05.376474   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.376481   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:05.376486   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:05.376545   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:05.401572   57716 cri.go:89] found id: ""
	I1210 05:58:05.401585   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.401593   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:05.401598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:05.401657   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:05.426804   57716 cri.go:89] found id: ""
	I1210 05:58:05.426820   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.426827   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:05.426832   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:05.426889   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:05.450557   57716 cri.go:89] found id: ""
	I1210 05:58:05.450570   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.450577   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:05.450583   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:05.450640   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:05.476587   57716 cri.go:89] found id: ""
	I1210 05:58:05.476601   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.476607   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:05.476612   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:05.476669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:05.501716   57716 cri.go:89] found id: ""
	I1210 05:58:05.501730   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.501736   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:05.501742   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:05.501801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:05.526971   57716 cri.go:89] found id: ""
	I1210 05:58:05.526985   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.526992   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:05.527000   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:05.527050   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:05.585508   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:05.585527   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:05.596526   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:05.596542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:05.661377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:05.650856   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.651411   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653229   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653833   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.655530   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:05.650856   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.651411   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653229   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653833   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.655530   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:05.661388   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:05.661398   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:05.732863   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:05.732882   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
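
The whole block repeats on a roughly three-second cadence because each cycle starts from `sudo pgrep -xnf kube-apiserver.*minikube.*`, which keeps finding no process. A hypothetical Go sketch of that poll-until-deadline shape (waitForAPIServer and the 3 s interval are illustrative, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiServerRunning mirrors the pgrep probe from the log: pgrep exits 0
// only when a matching process exists, so a nil error means "up".
func apiServerRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiServerRunning() {
			return nil
		}
		// In the real log, each miss is followed by the crictl listing
		// and log-gathering steps before the next attempt.
		time.Sleep(3 * time.Second)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
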
	I1210 05:58:08.260047   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:08.270586   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:08.270648   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:08.298955   57716 cri.go:89] found id: ""
	I1210 05:58:08.298984   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.298992   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:08.298997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:08.299088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:08.326321   57716 cri.go:89] found id: ""
	I1210 05:58:08.326335   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.326342   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:08.326347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:08.326410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:08.350063   57716 cri.go:89] found id: ""
	I1210 05:58:08.350077   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.350095   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:08.350100   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:08.350157   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:08.374459   57716 cri.go:89] found id: ""
	I1210 05:58:08.374472   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.374480   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:08.374485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:08.374549   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:08.398594   57716 cri.go:89] found id: ""
	I1210 05:58:08.398608   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.398615   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:08.398629   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:08.398685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:08.423334   57716 cri.go:89] found id: ""
	I1210 05:58:08.423348   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.423355   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:08.423366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:08.423424   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:08.448137   57716 cri.go:89] found id: ""
	I1210 05:58:08.448150   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.448157   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:08.448164   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:08.448175   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:08.510732   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:08.502942   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.503736   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505412   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505743   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.507339   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:08.502942   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.503736   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505412   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505743   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.507339   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:08.510751   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:08.510764   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:08.572194   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:08.572211   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:08.600446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:08.600463   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:08.657452   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:08.657469   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:11.170762   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:11.180886   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:11.180951   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:11.205555   57716 cri.go:89] found id: ""
	I1210 05:58:11.205569   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.205584   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:11.205590   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:11.205664   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:11.233080   57716 cri.go:89] found id: ""
	I1210 05:58:11.233094   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.233101   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:11.233106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:11.233164   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:11.257793   57716 cri.go:89] found id: ""
	I1210 05:58:11.257807   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.257814   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:11.257821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:11.257879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:11.282030   57716 cri.go:89] found id: ""
	I1210 05:58:11.282042   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.282050   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:11.282055   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:11.282119   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:11.305111   57716 cri.go:89] found id: ""
	I1210 05:58:11.305125   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.305132   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:11.305138   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:11.305196   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:11.329236   57716 cri.go:89] found id: ""
	I1210 05:58:11.329250   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.329257   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:11.329264   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:11.329320   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:11.354605   57716 cri.go:89] found id: ""
	I1210 05:58:11.354620   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.354627   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:11.354635   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:11.354645   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:11.386130   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:11.386146   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:11.444254   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:11.444272   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:11.455429   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:11.455446   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:11.522092   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:11.513675   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.514592   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516162   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516767   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.518501   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:11.513675   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.514592   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516162   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516767   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.518501   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:11.522102   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:11.522112   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:14.084603   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:14.094719   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:14.094779   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:14.118507   57716 cri.go:89] found id: ""
	I1210 05:58:14.118520   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.118528   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:14.118533   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:14.118588   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:14.144079   57716 cri.go:89] found id: ""
	I1210 05:58:14.144093   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.144100   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:14.144105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:14.144166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:14.174736   57716 cri.go:89] found id: ""
	I1210 05:58:14.174750   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.174757   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:14.174762   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:14.174837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:14.199688   57716 cri.go:89] found id: ""
	I1210 05:58:14.199709   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.199727   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:14.199733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:14.199789   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:14.227765   57716 cri.go:89] found id: ""
	I1210 05:58:14.227779   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.227786   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:14.227793   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:14.227853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:14.256531   57716 cri.go:89] found id: ""
	I1210 05:58:14.256546   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.256554   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:14.256559   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:14.256628   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:14.281035   57716 cri.go:89] found id: ""
	I1210 05:58:14.281054   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.281062   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:14.281070   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:14.281082   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:14.307632   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:14.307647   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:14.363636   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:14.363655   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:14.374356   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:14.374372   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:14.439204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:14.431102   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.431970   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.433831   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.434179   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.435681   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:14.431102   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.431970   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.433831   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.434179   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.435681   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:14.439214   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:14.439227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
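	Between API-server polls, minikube repeats the same diagnostic sweep seen above. As a rough manual equivalent (commands taken verbatim from this log; run on the minikube node), one might check:

	    sudo pgrep -xnf kube-apiserver.*minikube.*        # is an apiserver process up yet?
	    sudo crictl ps -a --quiet --name=kube-apiserver   # any apiserver container, running or exited?
	    sudo journalctl -u kubelet -n 400                 # last 400 lines of kubelet logs
	    sudo journalctl -u containerd -n 400              # last 400 lines of containerd logs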
	I1210 05:58:17.000609   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:17.011094   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:17.011152   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:17.034914   57716 cri.go:89] found id: ""
	I1210 05:58:17.034928   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.034935   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:17.034940   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:17.034997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:17.059216   57716 cri.go:89] found id: ""
	I1210 05:58:17.059229   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.059236   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:17.059241   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:17.059297   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:17.084654   57716 cri.go:89] found id: ""
	I1210 05:58:17.084667   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.084674   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:17.084679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:17.084734   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:17.108452   57716 cri.go:89] found id: ""
	I1210 05:58:17.108465   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.108472   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:17.108477   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:17.108538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:17.131638   57716 cri.go:89] found id: ""
	I1210 05:58:17.131652   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.131660   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:17.131666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:17.131724   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:17.157073   57716 cri.go:89] found id: ""
	I1210 05:58:17.157086   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.157093   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:17.157099   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:17.157155   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:17.181834   57716 cri.go:89] found id: ""
	I1210 05:58:17.181849   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.181856   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:17.181864   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:17.181874   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:17.237484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:17.237500   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:17.248803   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:17.248818   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:17.312123   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:17.304256   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.304922   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.306601   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.307186   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.308722   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:17.304256   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.304922   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.306601   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.307186   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.308722   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:17.312135   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:17.312145   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:17.375552   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:17.375570   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:19.903470   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:19.915506   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:19.915564   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:19.947745   57716 cri.go:89] found id: ""
	I1210 05:58:19.947758   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.947765   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:19.947771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:19.947832   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:19.980662   57716 cri.go:89] found id: ""
	I1210 05:58:19.980676   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.980683   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:19.980688   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:19.980746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:20.014764   57716 cri.go:89] found id: ""
	I1210 05:58:20.014787   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.014795   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:20.014801   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:20.014868   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:20.043079   57716 cri.go:89] found id: ""
	I1210 05:58:20.043093   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.043100   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:20.043106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:20.043168   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:20.071694   57716 cri.go:89] found id: ""
	I1210 05:58:20.071709   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.071717   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:20.071722   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:20.071785   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:20.097931   57716 cri.go:89] found id: ""
	I1210 05:58:20.097945   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.097952   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:20.097958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:20.098028   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:20.122795   57716 cri.go:89] found id: ""
	I1210 05:58:20.122809   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.122816   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:20.122824   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:20.122835   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:20.133825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:20.133840   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:20.194901   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:20.186552   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.187444   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189151   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189874   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.191519   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:20.186552   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.187444   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189151   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189874   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.191519   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:20.194911   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:20.194921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:20.256875   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:20.256894   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:20.283841   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:20.283857   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:22.843646   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:22.853725   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:22.853782   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:22.878310   57716 cri.go:89] found id: ""
	I1210 05:58:22.878325   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.878332   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:22.878336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:22.878393   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:22.902470   57716 cri.go:89] found id: ""
	I1210 05:58:22.902483   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.902490   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:22.902495   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:22.902552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:22.929428   57716 cri.go:89] found id: ""
	I1210 05:58:22.929442   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.929449   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:22.929454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:22.929512   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:22.962201   57716 cri.go:89] found id: ""
	I1210 05:58:22.962215   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.962222   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:22.962227   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:22.962286   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:22.988315   57716 cri.go:89] found id: ""
	I1210 05:58:22.988329   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.988336   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:22.988341   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:22.988397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:23.015788   57716 cri.go:89] found id: ""
	I1210 05:58:23.015801   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.015818   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:23.015824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:23.015895   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:23.040476   57716 cri.go:89] found id: ""
	I1210 05:58:23.040490   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.040497   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:23.040505   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:23.040515   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:23.097263   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:23.097281   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:23.108339   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:23.108357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:23.174372   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:23.166022   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.166801   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168382   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168890   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.170644   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:23.166022   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.166801   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168382   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168890   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.170644   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:23.174382   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:23.174393   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:23.238417   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:23.238433   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:25.767502   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:25.777560   57716 kubeadm.go:602] duration metric: took 4m3.698254406s to restartPrimaryControlPlane
	W1210 05:58:25.777622   57716 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 05:58:25.777697   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 05:58:26.181572   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:58:26.194845   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:58:26.202430   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:58:26.202489   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:58:26.210414   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:58:26.210423   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 05:58:26.210474   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:58:26.218226   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:58:26.218281   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:58:26.225499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:58:26.233426   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:58:26.233479   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:58:26.240639   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.247882   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:58:26.247936   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.255235   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:58:26.263002   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:58:26.263069   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
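	The stale-config cleanup above follows one pattern per kubeconfig file: grep for the expected control-plane endpoint and, when the check fails, remove the file before re-running kubeadm init. A minimal sketch for admin.conf (the `||` chaining is illustrative; minikube issues the two commands separately, as logged):

	    sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf \
	      || sudo rm -f /etc/kubernetes/admin.conf   # drop a config that lacks the expected endpoint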
	I1210 05:58:26.270271   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:58:26.308640   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:58:26.308937   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:58:26.373888   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:58:26.373948   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 05:58:26.373980   57716 kubeadm.go:319] OS: Linux
	I1210 05:58:26.374022   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:58:26.374069   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:58:26.374113   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:58:26.374157   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:58:26.374200   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:58:26.374244   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:58:26.374300   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:58:26.374343   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:58:26.374385   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:58:26.445771   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:58:26.445880   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:58:26.445970   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:58:26.455518   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:58:26.460828   57716 out.go:252]   - Generating certificates and keys ...
	I1210 05:58:26.460930   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:58:26.461006   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:58:26.461110   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 05:58:26.461178   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 05:58:26.461260   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 05:58:26.461325   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 05:58:26.461413   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 05:58:26.461483   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 05:58:26.461565   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 05:58:26.461644   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 05:58:26.461682   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 05:58:26.461743   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:58:26.520044   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:58:27.005643   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:58:27.519831   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:58:27.780223   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:58:28.060883   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:58:28.061559   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:58:28.064834   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:58:28.067981   57716 out.go:252]   - Booting up control plane ...
	I1210 05:58:28.068070   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:58:28.068143   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:58:28.069383   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:58:28.090093   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:58:28.090188   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:58:28.097949   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:58:28.098042   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:58:28.098080   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:58:28.241595   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:58:28.241705   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:02:28.236858   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00011534s
	I1210 06:02:28.236887   57716 kubeadm.go:319] 
	I1210 06:02:28.236942   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:02:28.236986   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:02:28.237128   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:02:28.237135   57716 kubeadm.go:319] 
	I1210 06:02:28.237233   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:02:28.237262   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:02:28.237291   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:02:28.237295   57716 kubeadm.go:319] 
	I1210 06:02:28.241711   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:02:28.242149   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:02:28.242254   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:02:28.242529   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:02:28.242535   57716 kubeadm.go:319] 
	I1210 06:02:28.242598   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
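	The kubeadm error above names its own troubleshooting steps and the health probe it gave up on. Run by hand on the node, those checks are:

	    systemctl status kubelet                      # service state, as suggested by kubeadm
	    journalctl -xeu kubelet                       # kubelet logs with context, as suggested by kubeadm
	    curl -sSL http://127.0.0.1:10248/healthz      # the probe kubeadm reports as connection refused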
	W1210 06:02:28.242730   57716 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00011534s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 06:02:28.242815   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:02:28.653276   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:02:28.666846   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:02:28.666902   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:02:28.676196   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:02:28.676206   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 06:02:28.676262   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:02:28.683929   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:02:28.683984   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:02:28.691531   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:02:28.699193   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:02:28.699247   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:02:28.706499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.713695   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:02:28.713761   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.721311   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:02:28.729191   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:02:28.729245   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:02:28.737059   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:02:28.777392   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:02:28.777754   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:02:28.849302   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:02:28.849368   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:02:28.849403   57716 kubeadm.go:319] OS: Linux
	I1210 06:02:28.849460   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:02:28.849508   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:02:28.849555   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:02:28.849602   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:02:28.849649   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:02:28.849696   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:02:28.849745   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:02:28.849792   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:02:28.849837   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:02:28.921564   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:02:28.921662   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:02:28.921748   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:02:28.926509   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:02:28.929904   57716 out.go:252]   - Generating certificates and keys ...
	I1210 06:02:28.929994   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:02:28.930057   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:02:28.930131   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:02:28.930201   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:02:28.930270   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:02:28.930322   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:02:28.930384   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:02:28.930444   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:02:28.930517   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:02:28.930589   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:02:28.930766   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:02:28.930854   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:02:29.206630   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:02:29.720612   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:02:29.887413   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:02:30.011857   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:02:30.197709   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:02:30.198347   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:02:30.201006   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:02:30.204123   57716 out.go:252]   - Booting up control plane ...
	I1210 06:02:30.204220   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:02:30.204296   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:02:30.204794   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:02:30.227311   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:02:30.227437   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:02:30.235547   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:02:30.235634   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:02:30.235945   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:02:30.373162   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:02:30.373269   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:06:30.371537   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000138118s
	I1210 06:06:30.371561   57716 kubeadm.go:319] 
	I1210 06:06:30.371641   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:06:30.371685   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:06:30.371790   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:06:30.371795   57716 kubeadm.go:319] 
	I1210 06:06:30.371898   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:06:30.371929   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:06:30.371959   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:06:30.371962   57716 kubeadm.go:319] 
	I1210 06:06:30.376139   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:06:30.376577   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:06:30.376687   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:06:30.376961   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:06:30.376966   57716 kubeadm.go:319] 
	I1210 06:06:30.377035   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
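	The repeated SystemVerification warning states that cgroup v1 hosts must explicitly set the kubelet option 'FailCgroupV1' to 'false' for kubelet v1.35 or newer. A minimal illustrative fragment, assuming the lower-camel-case field spelling of KubeletConfiguration v1beta1 (field name and file path are assumptions, not taken from this log):

	    cat <<'EOF' > /tmp/kubelet-cgroupv1.yaml   # hypothetical path, for illustration only
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false                        # assumed field name; opts back in to cgroups v1
	    EOF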
	I1210 06:06:30.377094   57716 kubeadm.go:403] duration metric: took 12m8.33567442s to StartCluster
	I1210 06:06:30.377125   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:06:30.377187   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:06:30.401132   57716 cri.go:89] found id: ""
	I1210 06:06:30.401147   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.401154   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:30.401160   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:06:30.401219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:06:30.437615   57716 cri.go:89] found id: ""
	I1210 06:06:30.437630   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.437637   57716 logs.go:284] No container was found matching "etcd"
	I1210 06:06:30.437642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:06:30.437699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:06:30.462667   57716 cri.go:89] found id: ""
	I1210 06:06:30.462681   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.462688   57716 logs.go:284] No container was found matching "coredns"
	I1210 06:06:30.462693   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:06:30.462752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:06:30.491407   57716 cri.go:89] found id: ""
	I1210 06:06:30.491420   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.491428   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:30.491433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:06:30.491493   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:06:30.516073   57716 cri.go:89] found id: ""
	I1210 06:06:30.516086   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.516092   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:30.516098   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:06:30.516154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:06:30.540636   57716 cri.go:89] found id: ""
	I1210 06:06:30.540649   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.540656   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:30.540679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:06:30.540736   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:06:30.565548   57716 cri.go:89] found id: ""
	I1210 06:06:30.565570   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.565578   57716 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:30.565586   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:30.565596   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:30.620548   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:30.620565   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:30.631284   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:30.631299   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:30.692450   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:30.684857   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.685223   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.686724   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.687076   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.688628   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:30.684857   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.685223   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.686724   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.687076   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.688628   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:30.692461   57716 logs.go:123] Gathering logs for containerd ...
	I1210 06:06:30.692471   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:06:30.755422   57716 logs.go:123] Gathering logs for container status ...
	I1210 06:06:30.755444   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:06:30.784033   57716 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:06:30.784067   57716 out.go:285] * 
	W1210 06:06:30.784157   57716 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.784176   57716 out.go:285] * 
	W1210 06:06:30.786468   57716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:06:30.793223   57716 out.go:203] 
	W1210 06:06:30.796021   57716 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.796079   57716 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:06:30.796099   57716 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:06:30.799180   57716 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477949649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477963918Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477995246Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478012321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478021774Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478031620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478040424Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478051649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478070291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478098854Z" level=info msg="Connect containerd service"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478383782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478960226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.497963642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498025206Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498057067Z" level=info msg="Start subscribing containerd event"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498101696Z" level=info msg="Start recovering state"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526273092Z" level=info msg="Start event monitor"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526463774Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526536103Z" level=info msg="Start streaming server"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526593630Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526675700Z" level=info msg="runtime interface starting up..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526739774Z" level=info msg="starting plugins..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526805581Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 05:54:20 functional-644034 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.528842308Z" level=info msg="containerd successfully booted in 0.071400s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:43.976939   23754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:43.977938   23754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:43.979745   23754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:43.980262   23754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:43.981838   23754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 06:08:44 up 51 min,  0 user,  load average: 0.55, 0.30, 0.38
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:08:40 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:41 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 495.
	Dec 10 06:08:41 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:41 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:41 functional-644034 kubelet[23589]: E1210 06:08:41.710599   23589 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:41 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:41 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:42 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 496.
	Dec 10 06:08:42 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:42 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:42 functional-644034 kubelet[23629]: E1210 06:08:42.449963   23629 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:42 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:42 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:43 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 10 06:08:43 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:43 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:43 functional-644034 kubelet[23667]: E1210 06:08:43.211913   23667 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:43 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:43 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:43 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 10 06:08:43 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:43 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:43 functional-644034 kubelet[23753]: E1210 06:08:43.975072   23753 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:43 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:43 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
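The root cause is consistent across the kubeadm output and the kubelet journal above: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host, so http://127.0.0.1:10248/healthz never answers and kubeadm gives up after 4m0s. A minimal diagnostic/override sketch, assuming shell access to the node; the failCgroupV1 field name is taken from the preflight warning, the config file path is illustrative, and per the same warning the SystemVerification check must additionally be skipped:

	# Confirm the host cgroup mode: "tmpfs" => cgroup v1, "cgroup2fs" => cgroup v2
	stat -fc %T /sys/fs/cgroup
	# Reproduce the probe kubeadm waits on (connection refused in this run)
	curl -sSL http://127.0.0.1:10248/healthz
	# Hypothetical kubelet override for v1.35+ on a cgroup v1 host (path is an assumption)
	cat <<'EOF' | sudo tee /etc/kubernetes/kubelet-overrides.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

The warning's preferred fix is migrating the host to cgroup v2 (e.g. booting a systemd host with systemd.unified_cgroup_hierarchy=1) rather than opting back into v1.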
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (361.636544ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.09s)
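One way to follow up on the suggestion printed at the end of the start log, using the binary and profile from this run; note the failure here is the cgroup v1 validation itself, so changing only the cgroup driver may not clear it:

	out/minikube-linux-arm64 start -p functional-644034 --extra-config=kubelet.cgroup-driver=systemd
	# then re-check the kubelet per the log's own advice
	out/minikube-linux-arm64 -p functional-644034 ssh -- sudo journalctl -xeu kubelet | tail -n 50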

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-644034 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-644034 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (56.120979ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-644034 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
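Every kubectl call in the post-mortem below fails the same way because the apiserver at 192.168.49.2:8441 never came up (see the kubelet failures earlier in this report). An illustrative reachability check, not part of the test run, that separates an apiserver outage from a kubeconfig problem:

	# TCP-level probe: refused in this run, matching the errors above
	nc -zv 192.168.49.2 8441
	# API-level probe: returns version JSON on a healthy control plane
	curl -k https://192.168.49.2:8441/version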
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-644034 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-644034 describe po hello-node-connect: exit status 1 (60.822068ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-644034 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-644034 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-644034 logs -l app=hello-node-connect: exit status 1 (59.795829ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-644034 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-644034 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-644034 describe svc hello-node-connect: exit status 1 (61.244685ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-644034 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
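The inspect output shows the container running with 8441/tcp published to 127.0.0.1:32791, so the Docker-level plumbing is in place even though nothing is listening inside. Two illustrative probes that separate the port mapping from the apiserver itself:

	# Host-side mapping for the apiserver port (32791 in this run)
	docker port functional-644034 8441
	# Probe through the published port; refused here because kube-apiserver is not running
	curl -k https://127.0.0.1:32791/version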
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (336.063362ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ cache   │ functional-644034 cache reload                                                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ ssh     │ functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │ 10 Dec 25 05:54 UTC │
	│ kubectl │ functional-644034 kubectl -- --context functional-644034 get pods                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ start   │ -p functional-644034 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                    │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 05:54 UTC │                     │
	│ cp      │ functional-644034 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                          │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ config  │ functional-644034 config unset cpus                                                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ config  │ functional-644034 config get cpus                                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ config  │ functional-644034 config set cpus 2                                                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ config  │ functional-644034 config get cpus                                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ config  │ functional-644034 config unset cpus                                                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ config  │ functional-644034 config get cpus                                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ ssh     │ functional-644034 ssh -n functional-644034 sudo cat /home/docker/cp-test.txt                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ ssh     │ functional-644034 ssh echo hello                                                                                                                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ cp      │ functional-644034 cp functional-644034:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm512016206/001/cp-test.txt │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ ssh     │ functional-644034 ssh cat /etc/hostname                                                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ ssh     │ functional-644034 ssh -n functional-644034 sudo cat /home/docker/cp-test.txt                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ tunnel  │ functional-644034 tunnel --alsologtostderr                                                                                                                  │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ tunnel  │ functional-644034 tunnel --alsologtostderr                                                                                                                  │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │                     │
	│ cp      │ functional-644034 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ ssh     │ functional-644034 ssh -n functional-644034 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:06 UTC │ 10 Dec 25 06:06 UTC │
	│ addons  │ functional-644034 addons list                                                                                                                               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ addons  │ functional-644034 addons list -o json                                                                                                                       │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:54:17.426935   57716 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:54:17.427082   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427086   57716 out.go:374] Setting ErrFile to fd 2...
	I1210 05:54:17.427090   57716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:17.427361   57716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:54:17.427717   57716 out.go:368] Setting JSON to false
	I1210 05:54:17.428531   57716 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2208,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:54:17.428587   57716 start.go:143] virtualization:  
	I1210 05:54:17.432151   57716 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:54:17.435955   57716 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:54:17.436010   57716 notify.go:221] Checking for updates...
	I1210 05:54:17.441966   57716 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:54:17.444885   57716 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:54:17.447901   57716 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:54:17.450919   57716 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:54:17.453767   57716 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:54:17.457197   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:17.457296   57716 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:54:17.484154   57716 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:54:17.484249   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.544910   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.535741476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:54:17.545002   57716 docker.go:319] overlay module found
	I1210 05:54:17.548056   57716 out.go:179] * Using the docker driver based on existing profile
	I1210 05:54:17.550880   57716 start.go:309] selected driver: docker
	I1210 05:54:17.550888   57716 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.550973   57716 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:54:17.551147   57716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:17.606051   57716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 05:54:17.597194445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:54:17.606475   57716 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:54:17.606497   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:17.606551   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:17.606592   57716 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:17.611686   57716 out.go:179] * Starting "functional-644034" primary control-plane node in "functional-644034" cluster
	I1210 05:54:17.614501   57716 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:54:17.617345   57716 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:54:17.620208   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:17.620284   57716 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:54:17.639591   57716 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:54:17.639602   57716 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 05:54:17.674108   57716 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 05:54:17.814864   57716 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
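
The two 404s above show the preload lookup probing its mirrors in order and falling through when neither carries a tarball for this release candidate (hence the later fallback to caching individual images). A sketch of that probe-and-fall-back pattern, purely illustrative rather than minikube's implementation:

package main

import (
	"fmt"
	"net/http"
)

// probePreload returns the first candidate URL that answers 200 to a HEAD
// request. The candidate order mirrors the log above.
func probePreload(candidates []string) (string, error) {
	for _, u := range candidates {
		resp, err := http.Head(u)
		if err != nil {
			continue // network error: try the next mirror
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u, nil
		}
		// e.g. status code 404, as logged above: try the next candidate
	}
	return "", fmt.Errorf("no preload tarball found among %d candidates", len(candidates))
}

func main() {
	url, err := probePreload([]string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
	})
	fmt.Println(url, err)
}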
	I1210 05:54:17.815057   57716 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/config.json ...
	I1210 05:54:17.815157   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:17.815311   57716 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:54:17.815341   57716 start.go:360] acquireMachinesLock for functional-644034: {Name:mk0dde6f976baac8ab90670ad27c806ab702c4c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:17.815383   57716 start.go:364] duration metric: took 26.643µs to acquireMachinesLock for "functional-644034"
	I1210 05:54:17.815394   57716 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:54:17.815398   57716 fix.go:54] fixHost starting: 
	I1210 05:54:17.815657   57716 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
	I1210 05:54:17.832534   57716 fix.go:112] recreateIfNeeded on functional-644034: state=Running err=<nil>
	W1210 05:54:17.832556   57716 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:54:17.836244   57716 out.go:252] * Updating the running docker "functional-644034" container ...
	I1210 05:54:17.836271   57716 machine.go:94] provisionDockerMachine start ...
	I1210 05:54:17.836346   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:17.858100   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:17.858407   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:17.858412   57716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:54:17.974240   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.011085   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
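
Provisioning commands such as the hostname call above run over SSH to the container's published 22/tcp port (here 127.0.0.1:32788). A minimal sketch of that round trip with golang.org/x/crypto/ssh, assuming key-based auth; the address, user, and key path are placeholders taken from the log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials an SSH endpoint (e.g. the 127.0.0.1:32788 port mapping
// above) and runs a single command, roughly what each "About to run SSH
// command" entry corresponds to.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rigs skip host-key pinning
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:32788", "docker", "id_rsa", "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}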
	
	I1210 05:54:18.011101   57716 ubuntu.go:182] provisioning hostname "functional-644034"
	I1210 05:54:18.011170   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.035073   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.035392   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.035402   57716 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-644034 && echo "functional-644034" | sudo tee /etc/hostname
	I1210 05:54:18.133146   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:18.205140   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-644034
	
	I1210 05:54:18.205224   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.223112   57716 main.go:143] libmachine: Using SSH client type: native
	I1210 05:54:18.223456   57716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1210 05:54:18.223470   57716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-644034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644034/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-644034' | sudo tee -a /etc/hosts; 
				fi
			fi
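
The shell snippet above is idempotent: it leaves /etc/hosts untouched when some line already ends in the hostname, rewrites an existing 127.0.1.1 entry otherwise, and appends a new one only as a last resort. The same logic expressed in Go, purely as an illustration of what the grep/sed/tee combination does:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: skip if the name is
// already present, rewrite the 127.0.1.1 line if there is one, else append.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 old-name\n", "functional-644034"))
}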
	I1210 05:54:18.298229   57716 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298265   57716 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298312   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 05:54:18.298319   57716 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.857µs
	I1210 05:54:18.298326   57716 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 05:54:18.298329   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 05:54:18.298336   57716 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298351   57716 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 82.455µs
	I1210 05:54:18.298357   57716 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298363   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 05:54:18.298368   57716 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.182µs
	I1210 05:54:18.298372   57716 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 05:54:18.298368   57716 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298381   57716 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298411   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 05:54:18.298406   57716 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298417   57716 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.08µs
	I1210 05:54:18.298422   57716 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 05:54:18.298434   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 05:54:18.298430   57716 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298438   57716 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 33.1µs
	I1210 05:54:18.298443   57716 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 05:54:18.298232   57716 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:54:18.298464   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 05:54:18.298468   57716 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 256.891µs
	I1210 05:54:18.298472   57716 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298474   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 05:54:18.298480   57716 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 50.314µs
	I1210 05:54:18.298482   57716 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 05:54:18.298484   57716 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298489   57716 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 122.242µs
	I1210 05:54:18.298496   57716 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 05:54:18.298511   57716 cache.go:87] Successfully saved all images to host disk.
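
The burst of cache.go entries above is a per-image check: acquire a lock for the image, stat the on-disk tarball, and record a microsecond-scale "took" duration when the file already exists. A sketch of that pattern (the path mapping and cache directory are placeholders, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// cachedPath maps a ref such as "registry.k8s.io/pause:3.10.1" to a cache
// file name like ".../registry.k8s.io/pause_3.10.1", as in the log paths.
func cachedPath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

var locks sync.Map // one mutex per image, like the per-image cache locks above

// ensureCached skips the save when the tarball already exists; the timing
// mirrors the "took NNNµs" entries in the log.
func ensureCached(cacheDir, image string) error {
	mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	start := time.Now()
	p := cachedPath(cacheDir, image)
	if _, err := os.Stat(p); err == nil {
		fmt.Printf("cache image %q -> %q took %s (exists, skipping)\n", image, p, time.Since(start))
		return nil
	}
	return fmt.Errorf("not cached: %s (a real implementation would download here)", image)
}

func main() {
	_ = ensureCached("/tmp/cache/images/arm64", "registry.k8s.io/pause:3.10.1")
}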
	I1210 05:54:18.371362   57716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:54:18.371378   57716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 05:54:18.371397   57716 ubuntu.go:190] setting up certificates
	I1210 05:54:18.371416   57716 provision.go:84] configureAuth start
	I1210 05:54:18.371483   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:18.389550   57716 provision.go:143] copyHostCerts
	I1210 05:54:18.389620   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 05:54:18.389627   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 05:54:18.389704   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 05:54:18.389803   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 05:54:18.389808   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 05:54:18.389833   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 05:54:18.389882   57716 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 05:54:18.389885   57716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 05:54:18.389906   57716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 05:54:18.389948   57716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.functional-644034 san=[127.0.0.1 192.168.49.2 functional-644034 localhost minikube]
	I1210 05:54:18.683488   57716 provision.go:177] copyRemoteCerts
	I1210 05:54:18.683553   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:54:18.683598   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.701578   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.806523   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:54:18.823889   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:54:18.841176   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:54:18.858693   57716 provision.go:87] duration metric: took 487.253139ms to configureAuth
	I1210 05:54:18.858709   57716 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:54:18.858903   57716 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 05:54:18.858907   57716 machine.go:97] duration metric: took 1.02263281s to provisionDockerMachine
	I1210 05:54:18.858914   57716 start.go:293] postStartSetup for "functional-644034" (driver="docker")
	I1210 05:54:18.858924   57716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:54:18.858977   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:54:18.859033   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:18.876377   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:18.982817   57716 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:54:18.986081   57716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:54:18.986098   57716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:54:18.986108   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 05:54:18.986162   57716 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 05:54:18.986244   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 05:54:18.986314   57716 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1210 05:54:18.986361   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1210 05:54:18.994265   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:19.014263   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1210 05:54:19.031905   57716 start.go:296] duration metric: took 172.976805ms for postStartSetup
	I1210 05:54:19.031977   57716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:54:19.032030   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.049399   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.152285   57716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:54:19.157124   57716 fix.go:56] duration metric: took 1.341718894s for fixHost
	I1210 05:54:19.157140   57716 start.go:83] releasing machines lock for "functional-644034", held for 1.341749918s
	I1210 05:54:19.157254   57716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644034
	I1210 05:54:19.178380   57716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:54:19.178438   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.178590   57716 ssh_runner.go:195] Run: cat /version.json
	I1210 05:54:19.178645   57716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
	I1210 05:54:19.200917   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.208552   57716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
	I1210 05:54:19.319193   57716 ssh_runner.go:195] Run: systemctl --version
	I1210 05:54:19.412255   57716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:54:19.416947   57716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:54:19.417021   57716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:54:19.424890   57716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:54:19.424903   57716 start.go:496] detecting cgroup driver to use...
	I1210 05:54:19.424932   57716 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:54:19.425004   57716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 05:54:19.440745   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:54:19.453977   57716 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:54:19.454039   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:54:19.469832   57716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:54:19.482994   57716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:54:19.599891   57716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:54:19.715074   57716 docker.go:234] disabling docker service ...
	I1210 05:54:19.715128   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:54:19.730660   57716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:54:19.743680   57716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:54:19.856717   57716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:54:20.006361   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:54:20.021419   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:54:20.038786   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.191836   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:54:20.201486   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 05:54:20.210685   57716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:54:20.210748   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:54:20.219896   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.228857   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:54:20.237489   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:54:20.246148   57716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:54:20.253998   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:54:20.262613   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:54:20.271236   57716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:54:20.280061   57716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:54:20.287623   57716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:54:20.295156   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:20.415485   57716 ssh_runner.go:195] Run: sudo systemctl restart containerd
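
The series of sed invocations above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime type, CNI conf dir) before the daemon restart. The SystemdCgroup edit, for example, amounts to the following; this is a sketch of the transformation, whereas minikube shells out to sed rather than doing it in-process:

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup rewrites any "SystemdCgroup = ..." assignment in a
// containerd config, preserving indentation, like the sed call above.
func setSystemdCgroup(configTOML string, enabled bool) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
}

func main() {
	in := "[plugins]\n    SystemdCgroup = true\n" // abbreviated sample config
	fmt.Print(setSystemdCgroup(in, false))
}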
	I1210 05:54:20.529881   57716 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 05:54:20.529941   57716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 05:54:20.533915   57716 start.go:564] Will wait 60s for crictl version
	I1210 05:54:20.533980   57716 ssh_runner.go:195] Run: which crictl
	I1210 05:54:20.537488   57716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:54:20.562843   57716 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 05:54:20.562909   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.586515   57716 ssh_runner.go:195] Run: containerd --version
	I1210 05:54:20.613476   57716 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 05:54:20.616435   57716 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:54:20.632538   57716 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:54:20.639504   57716 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 05:54:20.642345   57716 kubeadm.go:884] updating cluster {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:54:20.642611   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.817647   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:20.968512   57716 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 05:54:21.117681   57716 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 05:54:21.117754   57716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:54:21.141602   57716 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 05:54:21.141614   57716 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:54:21.141620   57716 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1210 05:54:21.141710   57716 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:54:21.141768   57716 ssh_runner.go:195] Run: sudo crictl info
	I1210 05:54:21.167304   57716 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 05:54:21.167327   57716 cni.go:84] Creating CNI manager for ""
	I1210 05:54:21.167335   57716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:54:21.167343   57716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:54:21.167363   57716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644034 NodeName:functional-644034 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:54:21.167468   57716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-644034"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
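
The generated kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A dependency-free sketch of splitting such a stream and reading each document's kind (a real consumer would use a YAML parser):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// kinds lists the "kind:" of each document in a multi-document YAML stream
// like the kubeadm config above.
func kinds(multiYAML string) []string {
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	var out []string
	for _, doc := range strings.Split(multiYAML, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			out = append(out, m[1])
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
}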
	
	I1210 05:54:21.167528   57716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:54:21.175157   57716 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:54:21.175220   57716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:54:21.182336   57716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 05:54:21.194714   57716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:54:21.206951   57716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1210 05:54:21.218855   57716 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:54:21.222543   57716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:54:21.341027   57716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:54:21.356762   57716 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034 for IP: 192.168.49.2
	I1210 05:54:21.356773   57716 certs.go:195] generating shared ca certs ...
	I1210 05:54:21.356789   57716 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:54:21.356923   57716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 05:54:21.356964   57716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 05:54:21.356970   57716 certs.go:257] generating profile certs ...
	I1210 05:54:21.357053   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.key
	I1210 05:54:21.357114   57716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key.40bc062c
	I1210 05:54:21.357152   57716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key
	I1210 05:54:21.357258   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 05:54:21.357288   57716 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 05:54:21.357307   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 05:54:21.357333   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:54:21.357354   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:54:21.357375   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 05:54:21.357423   57716 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 05:54:21.357978   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:54:21.378744   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:54:21.397697   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:54:21.419957   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 05:54:21.438314   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:54:21.455834   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:54:21.473865   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:54:21.494612   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:54:21.512109   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:54:21.529720   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 05:54:21.547670   57716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 05:54:21.568707   57716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:54:21.582063   57716 ssh_runner.go:195] Run: openssl version
	I1210 05:54:21.588394   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.595862   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:54:21.603363   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607193   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.607247   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:54:21.648234   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:54:21.655574   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.662804   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 05:54:21.670452   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674182   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.674235   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 05:54:21.715273   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:54:21.722425   57716 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.729498   57716 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 05:54:21.736743   57716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740323   57716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.740376   57716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 05:54:21.780972   57716 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
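
Each CA cert above is installed under /usr/share/ca-certificates, after which /etc/ssl/certs gets a "<subject-hash>.0" symlink whose name comes from the "openssl x509 -hash" output (hence the b5213941.0, 51391683.0, and 3ec20f2e.0 checks). A sketch of deriving that link name with the same openssl invocation seen in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLinkName returns the "<subject-hash>.0" file name used by the
// /etc/ssl/certs symlinks above, by shelling out to openssl as the log does.
func hashLinkName(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(name, err) // e.g. "b5213941.0 <nil>" for the CA hashed above
}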
	I1210 05:54:21.788152   57716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:54:21.791770   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:54:21.832469   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:54:21.875333   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:54:21.915959   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:54:21.956552   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:54:21.998157   57716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
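
The six openssl runs above are 24-hour expiry checks ("-checkend 86400") against the control-plane certificates. The equivalent check done in-process with crypto/x509, as a sketch rather than minikube's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether a PEM certificate expires inside the given
// window: the in-process equivalent of "openssl x509 -checkend 86400".
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}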
	I1210 05:54:22.041430   57716 kubeadm.go:401] StartCluster: {Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:22.041511   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 05:54:22.041600   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.071281   57716 cri.go:89] found id: ""
	I1210 05:54:22.071348   57716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:54:22.079286   57716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:54:22.079296   57716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:54:22.079350   57716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:54:22.086777   57716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.087401   57716 kubeconfig.go:125] found "functional-644034" server: "https://192.168.49.2:8441"
	I1210 05:54:22.088728   57716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:54:22.096851   57716 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:39:45.645176984 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 05:54:21.211483495 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
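The drift check above works by exit status: diff -u returns 0 when the live kubeadm.yaml matches the freshly rendered one and 1 when they differ. A small sketch of that pattern, with a hypothetical helper name (this is not minikube's ssh_runner):

    // Sketch: detect kubeadm config drift via diff's exit status.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // exit 0: files identical, no drift
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // exit 1: files differ, reconfigure
        }
        return false, "", err // exit 2 or other failure: a real error
    }

    func main() {
        drifted, diff, err := kubeadmConfigDrifted(
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new",
        )
        if err != nil {
            fmt.Println("diff failed:", err)
            return
        }
        if drifted {
            fmt.Println("detected kubeadm config drift:\n" + diff)
        }
    }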
	I1210 05:54:22.096860   57716 kubeadm.go:1161] stopping kube-system containers ...
	I1210 05:54:22.096878   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 05:54:22.096937   57716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:54:22.122240   57716 cri.go:89] found id: ""
	I1210 05:54:22.122301   57716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 05:54:22.139987   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:54:22.147655   57716 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 10 05:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:43 /etc/kubernetes/scheduler.conf
	
	I1210 05:54:22.147725   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:54:22.155240   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:54:22.163328   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.163381   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:54:22.170477   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.178188   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.178242   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:54:22.185324   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:54:22.192557   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:54:22.192613   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
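The grep-then-rm pairs above follow one pattern: probe each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, and delete any file that does not mention it so kubeadm can regenerate it. A sketch under those assumptions (hypothetical helper; error handling trimmed):

    // Sketch: remove kubeconfigs that do not reference the expected endpoint.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureEndpoint(endpoint string, files []string) {
        for _, f := range files {
            // grep exits 0 when the endpoint is present, non-zero when absent.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        ensureEndpoint("https://control-plane.minikube.internal:8441", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }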
	I1210 05:54:22.199756   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:54:22.207462   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:22.254516   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:23.834868   57716 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.580327189s)
	I1210 05:54:23.834928   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.033268   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 05:54:24.102476   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
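The five kubeadm invocations above run individual init phases, in order, against the refreshed /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the version-pinned binaries. A sketch of that sequencing (illustrative; minikube actually drives these commands over SSH):

    // Sketch: run kubeadm init phases one by one, stopping on the first failure.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const binDir = "/var/lib/minikube/binaries/v1.35.0-rc.1"
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            // Mirrors the logged command: env PATH="<binDir>:$PATH" kubeadm init phase <p> --config <cfg>
            cmd := "env PATH=\"" + binDir + ":$PATH\" kubeadm init phase " + p + " --config " + cfg
            if out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", p, err, out)
                return
            }
        }
    }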
	I1210 05:54:24.150822   57716 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:54:24.150892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... sudo pgrep -xnf kube-apiserver.*minikube.* re-run every ~500ms from 05:54:24.651 through 05:55:23.651 (119 further attempts; no kube-apiserver process found) ...]
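The condensed loop above is a poll-until-deadline wait: pgrep exits non-zero while no kube-apiserver process matches, so the runner sleeps roughly 500ms and retries. A sketch of the same loop (hypothetical helper, not minikube's api_server.go):

    // Sketch: wait for a kube-apiserver process to appear, polling with pgrep.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(time.Minute); err != nil {
            fmt.Println(err)
        }
    }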
	I1210 05:55:24.151853   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:24.151952   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:24.176715   57716 cri.go:89] found id: ""
	I1210 05:55:24.176729   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.176736   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:24.176741   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:24.176801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:24.199798   57716 cri.go:89] found id: ""
	I1210 05:55:24.199811   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.199819   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:24.199824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:24.199881   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:24.223446   57716 cri.go:89] found id: ""
	I1210 05:55:24.223459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.223466   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:24.223471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:24.223533   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:24.247963   57716 cri.go:89] found id: ""
	I1210 05:55:24.247976   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.247984   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:24.247989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:24.248052   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:24.271064   57716 cri.go:89] found id: ""
	I1210 05:55:24.271078   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.271085   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:24.271090   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:24.271156   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:24.295582   57716 cri.go:89] found id: ""
	I1210 05:55:24.295595   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.295603   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:24.295608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:24.295665   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:24.319439   57716 cri.go:89] found id: ""
	I1210 05:55:24.319459   57716 logs.go:282] 0 containers: []
	W1210 05:55:24.319466   57716 logs.go:284] No container was found matching "kindnet"
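The block above repeats one probe per control-plane component: ask crictl for container IDs matching the component name, and warn when the list comes back empty. A sketch of that iteration (hypothetical helper names):

    // Sketch: list containers for each expected component via crictl.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // --quiet prints one container ID per line; empty output means no match.
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
            } else {
                fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
            }
        }
    }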
	I1210 05:55:24.319474   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:24.319484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:24.374536   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:24.374555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:24.385677   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:24.385693   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:24.468968   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:24.460916   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.461690   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463385   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.463760   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:24.465289   11324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:24.468989   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:24.469008   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:24.534097   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:24.534114   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
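Each "Gathering logs for ..." pass above runs a fixed set of shell probes and records failures (such as the refused connection to localhost:8441) as warnings instead of aborting. A sketch under that assumption; ranging over a Go map would also explain why the probe order differs between cycles later in this log, though that is a guess:

    // Sketch: best-effort log gathering; each probe's failure is reported, not fatal.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        probes := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "containerd":       "sudo journalctl -u containerd -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        // Note: Go map iteration order is randomized, so probes run in varying order.
        for name, cmd := range probes {
            fmt.Println("Gathering logs for", name, "...")
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("failed %s: %v\n%s", name, err, out)
            }
        }
    }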
	I1210 05:55:27.065851   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:27.076794   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:27.076855   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:27.102051   57716 cri.go:89] found id: ""
	I1210 05:55:27.102064   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.102072   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:27.102087   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:27.102159   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:27.125833   57716 cri.go:89] found id: ""
	I1210 05:55:27.125846   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.125853   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:27.125858   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:27.125916   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:27.150782   57716 cri.go:89] found id: ""
	I1210 05:55:27.150795   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.150803   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:27.150808   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:27.150870   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:27.177446   57716 cri.go:89] found id: ""
	I1210 05:55:27.177459   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.177467   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:27.177472   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:27.177530   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:27.202542   57716 cri.go:89] found id: ""
	I1210 05:55:27.202557   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.202564   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:27.202570   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:27.202631   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:27.229302   57716 cri.go:89] found id: ""
	I1210 05:55:27.229316   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.229323   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:27.229328   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:27.229389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:27.258140   57716 cri.go:89] found id: ""
	I1210 05:55:27.258154   57716 logs.go:282] 0 containers: []
	W1210 05:55:27.258162   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:27.258170   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:27.258179   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:27.313276   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:27.313296   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:27.324237   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:27.324252   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:27.386291   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:27.378930   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.379718   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381201   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.381605   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:27.383124   11433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:27.386311   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:27.386321   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:27.451779   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:27.451797   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:29.984865   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:29.994990   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:29.995106   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:30.034785   57716 cri.go:89] found id: ""
	I1210 05:55:30.034800   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.034808   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:30.034815   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:30.034899   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:30.063792   57716 cri.go:89] found id: ""
	I1210 05:55:30.063807   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.063816   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:30.063821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:30.063895   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:30.095916   57716 cri.go:89] found id: ""
	I1210 05:55:30.095931   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.095939   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:30.095945   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:30.096020   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:30.123266   57716 cri.go:89] found id: ""
	I1210 05:55:30.123293   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.123300   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:30.123306   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:30.123378   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:30.149145   57716 cri.go:89] found id: ""
	I1210 05:55:30.149159   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.149167   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:30.149173   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:30.149231   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:30.178515   57716 cri.go:89] found id: ""
	I1210 05:55:30.178529   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.178536   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:30.178541   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:30.178601   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:30.202938   57716 cri.go:89] found id: ""
	I1210 05:55:30.202952   57716 logs.go:282] 0 containers: []
	W1210 05:55:30.202959   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:30.202968   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:30.202977   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:30.262024   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:30.262042   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:30.273395   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:30.273411   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:30.339082   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:30.331422   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.332246   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.333884   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.334216   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:30.335714   11538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:30.339099   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:30.339111   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:30.401574   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:30.401599   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:32.947286   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:32.957296   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:32.957360   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:32.982165   57716 cri.go:89] found id: ""
	I1210 05:55:32.982179   57716 logs.go:282] 0 containers: []
	W1210 05:55:32.982186   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:32.982191   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:32.982247   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:33.020504   57716 cri.go:89] found id: ""
	I1210 05:55:33.020517   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.020525   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:33.020530   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:33.020590   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:33.045171   57716 cri.go:89] found id: ""
	I1210 05:55:33.045185   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.045193   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:33.045198   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:33.045261   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:33.069898   57716 cri.go:89] found id: ""
	I1210 05:55:33.069923   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.069931   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:33.069936   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:33.070003   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:33.094592   57716 cri.go:89] found id: ""
	I1210 05:55:33.094607   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.094614   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:33.094619   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:33.094687   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:33.119752   57716 cri.go:89] found id: ""
	I1210 05:55:33.119765   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.119772   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:33.119778   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:33.119842   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:33.144728   57716 cri.go:89] found id: ""
	I1210 05:55:33.144742   57716 logs.go:282] 0 containers: []
	W1210 05:55:33.144749   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:33.144757   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:33.144767   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:33.202510   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:33.202527   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:33.213898   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:33.213914   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:33.276996   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:33.269599   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.270004   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.271689   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.272071   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:33.273649   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:33.277006   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:33.277016   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:33.337654   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:33.337675   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:35.867520   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:35.877494   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:35.877552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:35.903487   57716 cri.go:89] found id: ""
	I1210 05:55:35.903501   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.903508   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:35.903514   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:35.903571   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:35.933040   57716 cri.go:89] found id: ""
	I1210 05:55:35.933054   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.933060   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:35.933066   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:35.933150   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:35.956439   57716 cri.go:89] found id: ""
	I1210 05:55:35.956453   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.956460   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:35.956466   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:35.956522   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:35.983120   57716 cri.go:89] found id: ""
	I1210 05:55:35.983133   57716 logs.go:282] 0 containers: []
	W1210 05:55:35.983140   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:35.983155   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:35.983213   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:36.024072   57716 cri.go:89] found id: ""
	I1210 05:55:36.024085   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.024093   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:36.024098   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:36.024163   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:36.050259   57716 cri.go:89] found id: ""
	I1210 05:55:36.050282   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.050289   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:36.050296   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:36.050375   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:36.079897   57716 cri.go:89] found id: ""
	I1210 05:55:36.079911   57716 logs.go:282] 0 containers: []
	W1210 05:55:36.079918   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:36.079925   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:36.079935   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:36.109390   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:36.109405   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:36.164390   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:36.164407   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:36.175368   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:36.175383   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:36.247833   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:36.240230   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.240985   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.242571   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.243126   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:36.244643   11760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:55:36.247845   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:36.247855   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:38.808939   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:38.819051   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:38.819128   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:38.843620   57716 cri.go:89] found id: ""
	I1210 05:55:38.843643   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.843650   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:38.843656   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:38.843713   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:38.872120   57716 cri.go:89] found id: ""
	I1210 05:55:38.872134   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.872141   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:38.872147   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:38.872204   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:38.896725   57716 cri.go:89] found id: ""
	I1210 05:55:38.896738   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.896746   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:38.896751   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:38.896807   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:38.924643   57716 cri.go:89] found id: ""
	I1210 05:55:38.924657   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.924665   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:38.924670   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:38.924729   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:38.952693   57716 cri.go:89] found id: ""
	I1210 05:55:38.952706   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.952714   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:38.952719   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:38.952774   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:38.976175   57716 cri.go:89] found id: ""
	I1210 05:55:38.976189   57716 logs.go:282] 0 containers: []
	W1210 05:55:38.976196   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:38.976201   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:38.976266   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:39.001657   57716 cri.go:89] found id: ""
	I1210 05:55:39.001671   57716 logs.go:282] 0 containers: []
	W1210 05:55:39.001678   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:39.001686   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:39.001698   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:39.013220   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:39.013240   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:39.084372   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:39.073429   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.077278   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.078119   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.079595   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.080038   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:39.073429   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.077278   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.078119   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.079595   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:39.080038   11853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:39.084383   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:39.084393   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:39.145338   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:39.145357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:39.173909   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:39.173925   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
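The cycle above is minikube's apiserver wait loop: roughly every three seconds it looks for a running kube-apiserver process, asks the CRI for each control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs. As a minimal sketch, the same probes can be re-run by hand while the profile is still up; the crictl and journalctl invocations below are copied from the log, while the profile name is a hypothetical placeholder:

	P=my-profile   # hypothetical profile name; substitute the profile from this test run
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  minikube -p "$P" ssh "sudo crictl ps -a --quiet --name=$c"
	done
	minikube -p "$P" ssh "sudo journalctl -u kubelet -n 400"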
	I1210 05:55:41.731159   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:41.741270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:41.741329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:41.765933   57716 cri.go:89] found id: ""
	I1210 05:55:41.765946   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.765953   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:41.765958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:41.766034   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:41.790822   57716 cri.go:89] found id: ""
	I1210 05:55:41.790842   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.790850   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:41.790855   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:41.790924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:41.817287   57716 cri.go:89] found id: ""
	I1210 05:55:41.817300   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.817312   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:41.817318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:41.817386   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:41.842964   57716 cri.go:89] found id: ""
	I1210 05:55:41.842978   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.842986   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:41.842991   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:41.843068   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:41.871615   57716 cri.go:89] found id: ""
	I1210 05:55:41.871629   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.871637   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:41.871642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:41.871699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:41.896188   57716 cri.go:89] found id: ""
	I1210 05:55:41.896216   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.896223   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:41.896229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:41.896294   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:41.930282   57716 cri.go:89] found id: ""
	I1210 05:55:41.930296   57716 logs.go:282] 0 containers: []
	W1210 05:55:41.930303   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:41.930311   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:41.930320   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:41.985380   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:41.985397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:42.004532   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:42.004551   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:42.075101   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:42.065585   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.066618   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.068626   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.069338   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:42.071222   11962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:42.075129   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:42.075143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:42.145894   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:42.145929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:44.679885   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:44.690876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:44.690937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:44.720897   57716 cri.go:89] found id: ""
	I1210 05:55:44.720911   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.720918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:44.720923   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:44.720983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:44.745408   57716 cri.go:89] found id: ""
	I1210 05:55:44.745421   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.745427   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:44.745432   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:44.745495   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:44.773707   57716 cri.go:89] found id: ""
	I1210 05:55:44.773721   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.773728   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:44.773733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:44.773792   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:44.798508   57716 cri.go:89] found id: ""
	I1210 05:55:44.798522   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.798529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:44.798535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:44.798597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:44.822493   57716 cri.go:89] found id: ""
	I1210 05:55:44.822507   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.822515   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:44.822519   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:44.822578   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:44.847294   57716 cri.go:89] found id: ""
	I1210 05:55:44.847308   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.847316   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:44.847321   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:44.847380   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:44.870447   57716 cri.go:89] found id: ""
	I1210 05:55:44.870460   57716 logs.go:282] 0 containers: []
	W1210 05:55:44.870468   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:44.870475   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:44.870485   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:44.926160   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:44.926177   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:44.937022   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:44.937037   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:45.007191   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:44.990257   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.990902   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992455   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:44.992971   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:45.000009   12065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:45.007203   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:45.007215   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:45.103439   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:45.103467   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:47.653520   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:47.663666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:47.663731   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:47.697444   57716 cri.go:89] found id: ""
	I1210 05:55:47.697457   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.697464   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:47.697469   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:47.697529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:47.728308   57716 cri.go:89] found id: ""
	I1210 05:55:47.728322   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.728329   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:47.728334   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:47.728391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:47.753518   57716 cri.go:89] found id: ""
	I1210 05:55:47.753531   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.753538   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:47.753543   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:47.753600   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:47.777296   57716 cri.go:89] found id: ""
	I1210 05:55:47.777309   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.777316   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:47.777322   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:47.777378   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:47.800977   57716 cri.go:89] found id: ""
	I1210 05:55:47.800998   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.801005   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:47.801010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:47.801067   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:47.825052   57716 cri.go:89] found id: ""
	I1210 05:55:47.825065   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.825073   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:47.825078   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:47.825147   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:47.848863   57716 cri.go:89] found id: ""
	I1210 05:55:47.848876   57716 logs.go:282] 0 containers: []
	W1210 05:55:47.848883   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:47.848892   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:47.848902   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:47.905124   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:47.905139   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:47.915783   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:47.915800   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:47.980730   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:47.973198   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.973889   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.975569   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.976034   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:47.977547   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:47.980740   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:47.980750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:48.042937   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:48.042955   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:50.581353   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:50.591210   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:50.591269   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:50.620774   57716 cri.go:89] found id: ""
	I1210 05:55:50.620788   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.620794   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:50.620800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:50.620864   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:50.645050   57716 cri.go:89] found id: ""
	I1210 05:55:50.645064   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.645071   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:50.645082   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:50.645146   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:50.679878   57716 cri.go:89] found id: ""
	I1210 05:55:50.679890   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.679897   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:50.679903   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:50.679960   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:50.710005   57716 cri.go:89] found id: ""
	I1210 05:55:50.710018   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.710026   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:50.710032   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:50.710088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:50.744288   57716 cri.go:89] found id: ""
	I1210 05:55:50.744302   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.744311   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:50.744317   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:50.744373   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:50.767954   57716 cri.go:89] found id: ""
	I1210 05:55:50.767967   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.767974   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:50.767980   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:50.768037   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:50.796157   57716 cri.go:89] found id: ""
	I1210 05:55:50.796171   57716 logs.go:282] 0 containers: []
	W1210 05:55:50.796179   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:50.796186   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:50.796196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:50.851621   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:50.851638   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:50.863074   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:50.863091   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:50.939619   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:50.930950   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.931767   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.933629   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.934152   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:50.935732   12273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:50.939629   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:50.939639   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:51.008577   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:51.008598   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
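Every describe-nodes attempt in these cycles fails the same way: kubectl on the node cannot reach the apiserver at https://localhost:8441 (connection refused), so only node-level logs get collected. Two illustrative manual checks of that endpoint, assuming curl and ss are present in the node image (neither command appears in the log):

	minikube -p my-profile ssh "curl -k https://localhost:8441/healthz"   # my-profile is hypothetical, as above
	minikube -p my-profile ssh "sudo ss -ltnp | grep 8441"

A refused connection together with an empty ss listing matches the log: no kube-apiserver container has been created, so nothing is listening on 8441.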
	I1210 05:55:53.537065   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:53.546821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:53.546878   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:53.571853   57716 cri.go:89] found id: ""
	I1210 05:55:53.571867   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.571874   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:53.571879   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:53.571937   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:53.595941   57716 cri.go:89] found id: ""
	I1210 05:55:53.595955   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.595962   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:53.595967   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:53.596023   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:53.620466   57716 cri.go:89] found id: ""
	I1210 05:55:53.620480   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.620486   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:53.620492   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:53.620546   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:53.643628   57716 cri.go:89] found id: ""
	I1210 05:55:53.643641   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.643649   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:53.643655   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:53.643711   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:53.673517   57716 cri.go:89] found id: ""
	I1210 05:55:53.673532   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.673539   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:53.673545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:53.673601   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:53.709885   57716 cri.go:89] found id: ""
	I1210 05:55:53.709899   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.709906   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:53.709911   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:53.709974   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:53.739765   57716 cri.go:89] found id: ""
	I1210 05:55:53.739778   57716 logs.go:282] 0 containers: []
	W1210 05:55:53.739785   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:53.739792   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:53.739802   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:53.795061   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:53.795080   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:53.806101   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:53.806117   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:53.872226   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:53.863177   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.863668   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865274   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865802   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.867416   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:53.863177   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.863668   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865274   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.865802   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:53.867416   12374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:53.872238   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:53.872248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:53.933601   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:53.933619   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:56.466912   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:56.476796   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:56.476855   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:56.501021   57716 cri.go:89] found id: ""
	I1210 05:55:56.501035   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.501042   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:56.501048   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:56.501109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:56.524562   57716 cri.go:89] found id: ""
	I1210 05:55:56.524576   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.524583   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:56.524588   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:56.524644   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:56.547648   57716 cri.go:89] found id: ""
	I1210 05:55:56.547662   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.547669   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:56.547674   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:56.547730   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:56.576863   57716 cri.go:89] found id: ""
	I1210 05:55:56.576876   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.576883   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:56.576895   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:56.576956   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:56.600963   57716 cri.go:89] found id: ""
	I1210 05:55:56.600977   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.600984   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:56.600989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:56.601049   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:56.624726   57716 cri.go:89] found id: ""
	I1210 05:55:56.624739   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.624747   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:56.624755   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:56.624816   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:56.657236   57716 cri.go:89] found id: ""
	I1210 05:55:56.657249   57716 logs.go:282] 0 containers: []
	W1210 05:55:56.657261   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:56.657270   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:56.657280   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:55:56.697559   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:56.697576   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:56.757986   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:56.758004   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:56.769563   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:56.769579   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:56.830223   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:56.822784   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.823676   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825199   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825498   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.826965   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:56.822784   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.823676   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825199   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.825498   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:56.826965   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:56.830233   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:56.830243   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:59.393208   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:55:59.403384   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:55:59.403452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:55:59.428722   57716 cri.go:89] found id: ""
	I1210 05:55:59.428749   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.428757   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:55:59.428763   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:55:59.428833   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:55:59.453874   57716 cri.go:89] found id: ""
	I1210 05:55:59.453887   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.453895   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:55:59.453901   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:55:59.453962   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:55:59.478240   57716 cri.go:89] found id: ""
	I1210 05:55:59.478253   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.478260   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:55:59.478271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:55:59.478329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:55:59.502468   57716 cri.go:89] found id: ""
	I1210 05:55:59.502482   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.502489   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:55:59.502494   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:55:59.502554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:55:59.526784   57716 cri.go:89] found id: ""
	I1210 05:55:59.526797   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.526804   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:55:59.526809   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:55:59.526872   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:55:59.552473   57716 cri.go:89] found id: ""
	I1210 05:55:59.552486   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.552493   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:55:59.552499   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:55:59.552552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:55:59.576249   57716 cri.go:89] found id: ""
	I1210 05:55:59.576262   57716 logs.go:282] 0 containers: []
	W1210 05:55:59.576269   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:55:59.576276   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:55:59.576288   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:55:59.631147   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:55:59.631169   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:55:59.642052   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:55:59.642067   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:55:59.721714   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:55:59.711733   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.712627   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714378   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714692   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.716946   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:55:59.711733   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.712627   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714378   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.714692   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:55:59.716946   12576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:55:59.721733   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:55:59.721745   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:55:59.783216   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:55:59.783235   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:02.312967   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:02.323213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:02.323279   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:02.347978   57716 cri.go:89] found id: ""
	I1210 05:56:02.347992   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.348011   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:02.348017   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:02.348073   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:02.372899   57716 cri.go:89] found id: ""
	I1210 05:56:02.372912   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.372920   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:02.372926   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:02.372985   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:02.396971   57716 cri.go:89] found id: ""
	I1210 05:56:02.396985   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.396992   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:02.396997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:02.397057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:02.422416   57716 cri.go:89] found id: ""
	I1210 05:56:02.422430   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.422437   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:02.422443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:02.422501   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:02.447977   57716 cri.go:89] found id: ""
	I1210 05:56:02.447990   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.448004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:02.448009   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:02.448066   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:02.471774   57716 cri.go:89] found id: ""
	I1210 05:56:02.471788   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.471795   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:02.471800   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:02.471857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:02.496057   57716 cri.go:89] found id: ""
	I1210 05:56:02.496072   57716 logs.go:282] 0 containers: []
	W1210 05:56:02.496079   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:02.496088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:02.496098   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:02.523576   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:02.523592   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:02.579266   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:02.579296   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:02.590792   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:02.590809   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:02.657064   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:02.648570   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.649296   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651116   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.651750   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:02.653344   12694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:02.657075   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:02.657085   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
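
Each poll above walks the expected control-plane components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) through the CRI and finds no containers in any state. A minimal Go sketch of that probe, built around the exact crictl invocation shown in the log (the helper name is hypothetical, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<component>`:
    // it returns the IDs of all containers (any state) whose name matches,
    // or an empty slice -- the `found id: ""` / `0 containers` case above.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
    		"--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil // one container ID per line
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }
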
	I1210 05:56:05.229868   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:05.239953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:05.240012   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:05.264605   57716 cri.go:89] found id: ""
	I1210 05:56:05.264618   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.264626   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:05.264631   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:05.264689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:05.288264   57716 cri.go:89] found id: ""
	I1210 05:56:05.288277   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.288285   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:05.288290   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:05.288354   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:05.313427   57716 cri.go:89] found id: ""
	I1210 05:56:05.313441   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.313448   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:05.313454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:05.313510   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:05.344659   57716 cri.go:89] found id: ""
	I1210 05:56:05.344673   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.344680   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:05.344686   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:05.344743   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:05.369600   57716 cri.go:89] found id: ""
	I1210 05:56:05.369614   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.369621   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:05.369626   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:05.369683   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:05.397066   57716 cri.go:89] found id: ""
	I1210 05:56:05.397080   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.397088   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:05.397093   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:05.397150   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:05.422728   57716 cri.go:89] found id: ""
	I1210 05:56:05.422744   57716 logs.go:282] 0 containers: []
	W1210 05:56:05.422751   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:05.422759   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:05.422770   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:05.485204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:05.477114   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.477952   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479558   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.479866   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:05.481321   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:05.485215   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:05.485227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:05.547693   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:05.547712   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:05.580471   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:05.580488   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:05.639350   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:05.639369   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
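
When no components are found, the same four diagnostics are re-collected each round: kubelet and containerd unit logs via journalctl, a severity-filtered kernel log via dmesg (per util-linux: -P no pager, -H human-readable timestamps, -L=never no color, --level restricting to warn and above), and a container inventory via crictl with a docker fallback. An illustrative Go driver for that gathering step, using the command strings verbatim from the ssh_runner lines above (a sketch, not minikube's source):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Commands copied from the log; each is run through `/bin/bash -c`
    	// just as the ssh_runner lines show.
    	diagnostics := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, d := range diagnostics {
    		fmt.Println("==>", d.name)
    		out, _ := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
    		fmt.Print(string(out))
    	}
    }
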
	I1210 05:56:08.151149   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:08.162270   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:08.162351   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:08.189435   57716 cri.go:89] found id: ""
	I1210 05:56:08.189448   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.189455   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:08.189465   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:08.189530   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:08.218992   57716 cri.go:89] found id: ""
	I1210 05:56:08.219006   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.219031   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:08.219042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:08.219100   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:08.245141   57716 cri.go:89] found id: ""
	I1210 05:56:08.245153   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.245160   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:08.245165   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:08.245221   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:08.273294   57716 cri.go:89] found id: ""
	I1210 05:56:08.273307   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.273314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:08.273319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:08.273382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:08.298396   57716 cri.go:89] found id: ""
	I1210 05:56:08.298410   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.298417   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:08.298422   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:08.298482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:08.322670   57716 cri.go:89] found id: ""
	I1210 05:56:08.322684   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.322691   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:08.322696   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:08.322753   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:08.347986   57716 cri.go:89] found id: ""
	I1210 05:56:08.348000   57716 logs.go:282] 0 containers: []
	W1210 05:56:08.348007   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:08.348015   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:08.348024   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:08.411052   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:08.411070   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:08.438849   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:08.438865   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:08.496560   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:08.496587   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:08.507905   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:08.507921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:08.573377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:08.565623   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.566145   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.567826   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.568336   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:08.569867   12907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
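
Every describe-nodes attempt in this window fails identically: kubectl dials the apiserver at localhost:8441 and the TCP connection is refused, meaning nothing is listening on that port, consistent with the empty kube-apiserver container listings above. The condition can be reproduced directly with a short dial, sketched here in Go:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// "connection refused" from kubectl means this dial fails instantly:
    	// no process is bound to the apiserver port on the node.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
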
	I1210 05:56:11.073585   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:11.083689   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:11.083757   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:11.108541   57716 cri.go:89] found id: ""
	I1210 05:56:11.108620   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.108628   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:11.108634   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:11.108694   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:11.134331   57716 cri.go:89] found id: ""
	I1210 05:56:11.134346   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.134353   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:11.134358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:11.134417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:11.158615   57716 cri.go:89] found id: ""
	I1210 05:56:11.158628   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.158635   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:11.158640   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:11.158698   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:11.183689   57716 cri.go:89] found id: ""
	I1210 05:56:11.183703   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.183710   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:11.183716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:11.183775   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:11.207798   57716 cri.go:89] found id: ""
	I1210 05:56:11.207812   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.207819   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:11.207825   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:11.207882   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:11.236712   57716 cri.go:89] found id: ""
	I1210 05:56:11.236726   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.236734   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:11.236739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:11.236801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:11.260759   57716 cri.go:89] found id: ""
	I1210 05:56:11.260773   57716 logs.go:282] 0 containers: []
	W1210 05:56:11.260780   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:11.260788   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:11.260798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:11.289769   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:11.289786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:11.354319   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:11.354343   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:11.365879   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:11.365896   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:11.429322   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:11.420840   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.421615   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.423423   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.424052   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:11.425736   13010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:11.429334   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:11.429347   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
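
Before each CRI enumeration, the cheaper host-side check `sudo pgrep -xnf kube-apiserver.*minikube.*` runs first; only when it finds nothing does the per-component listing follow. A single-shot Go version of that check (illustrative; the interpretation of the flags is from pgrep's own semantics):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x: the whole command line must match, -n: newest match only,
    	// -f: match against the full command line. pgrep exits with
    	// status 1 on no match, which surfaces here as a non-nil error.
    	out, err := exec.Command("sudo", "pgrep", "-xnf",
    		"kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("no kube-apiserver process on this node")
    		return
    	}
    	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
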
	I1210 05:56:13.992257   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:14.005684   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:14.005747   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:14.031213   57716 cri.go:89] found id: ""
	I1210 05:56:14.031233   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.031241   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:14.031246   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:14.031308   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:14.055927   57716 cri.go:89] found id: ""
	I1210 05:56:14.055941   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.055948   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:14.055953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:14.056011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:14.080687   57716 cri.go:89] found id: ""
	I1210 05:56:14.080700   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.080707   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:14.080712   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:14.080770   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:14.108973   57716 cri.go:89] found id: ""
	I1210 05:56:14.108986   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.108993   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:14.108999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:14.109057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:14.138949   57716 cri.go:89] found id: ""
	I1210 05:56:14.138963   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.138971   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:14.138976   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:14.139058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:14.162184   57716 cri.go:89] found id: ""
	I1210 05:56:14.162199   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.162206   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:14.162211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:14.162267   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:14.186846   57716 cri.go:89] found id: ""
	I1210 05:56:14.186859   57716 logs.go:282] 0 containers: []
	W1210 05:56:14.186866   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:14.186874   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:14.186885   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:14.214982   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:14.214998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:14.272262   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:14.272279   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:14.283290   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:14.283306   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:14.343519   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:14.335616   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.336321   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338030   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.338568   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:14.340121   13117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:14.343530   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:14.343541   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:16.905886   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:16.915932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:16.915991   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:16.943689   57716 cri.go:89] found id: ""
	I1210 05:56:16.943703   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.943710   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:16.943715   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:16.943772   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:16.971692   57716 cri.go:89] found id: ""
	I1210 05:56:16.971705   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.971712   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:16.971717   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:16.971774   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:16.998705   57716 cri.go:89] found id: ""
	I1210 05:56:16.998721   57716 logs.go:282] 0 containers: []
	W1210 05:56:16.998729   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:16.998734   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:16.998805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:17.028716   57716 cri.go:89] found id: ""
	I1210 05:56:17.028730   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.028737   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:17.028743   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:17.028810   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:17.056330   57716 cri.go:89] found id: ""
	I1210 05:56:17.056344   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.056351   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:17.056355   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:17.056412   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:17.084606   57716 cri.go:89] found id: ""
	I1210 05:56:17.084620   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.084627   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:17.084633   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:17.084690   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:17.108463   57716 cri.go:89] found id: ""
	I1210 05:56:17.108476   57716 logs.go:282] 0 containers: []
	W1210 05:56:17.108484   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:17.108492   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:17.108502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:17.119206   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:17.119223   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:17.184513   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:17.176815   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.177383   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.178877   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.179482   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:17.181206   13209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:17.184523   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:17.184533   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:17.249050   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:17.249068   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:17.277433   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:17.277448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:19.835189   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:19.845211   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:19.845270   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:19.869437   57716 cri.go:89] found id: ""
	I1210 05:56:19.869451   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.869457   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:19.869463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:19.869525   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:19.893666   57716 cri.go:89] found id: ""
	I1210 05:56:19.893680   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.893687   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:19.893691   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:19.893746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:19.925851   57716 cri.go:89] found id: ""
	I1210 05:56:19.925864   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.925871   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:19.925876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:19.925934   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:19.953268   57716 cri.go:89] found id: ""
	I1210 05:56:19.953283   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.953289   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:19.953295   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:19.953352   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:19.980541   57716 cri.go:89] found id: ""
	I1210 05:56:19.980555   57716 logs.go:282] 0 containers: []
	W1210 05:56:19.980562   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:19.980567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:19.980629   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:20.014350   57716 cri.go:89] found id: ""
	I1210 05:56:20.014365   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.014383   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:20.014389   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:20.014463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:20.040904   57716 cri.go:89] found id: ""
	I1210 05:56:20.040918   57716 logs.go:282] 0 containers: []
	W1210 05:56:20.040926   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:20.040933   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:20.040943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:20.097054   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:20.097072   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:20.108443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:20.108459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:20.173764   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:20.164932   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166475   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.166965   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168506   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:20.168930   13319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:20.173773   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:20.173784   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:20.235116   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:20.235134   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:22.763516   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:22.773433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:22.773490   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:22.797542   57716 cri.go:89] found id: ""
	I1210 05:56:22.797556   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.797562   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:22.797568   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:22.797622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:22.821893   57716 cri.go:89] found id: ""
	I1210 05:56:22.821907   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.821915   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:22.821920   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:22.821976   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:22.850542   57716 cri.go:89] found id: ""
	I1210 05:56:22.850557   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.850564   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:22.850569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:22.850627   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:22.875288   57716 cri.go:89] found id: ""
	I1210 05:56:22.875301   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.875314   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:22.875320   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:22.875376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:22.900725   57716 cri.go:89] found id: ""
	I1210 05:56:22.900739   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.900747   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:22.900752   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:22.900808   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:22.931217   57716 cri.go:89] found id: ""
	I1210 05:56:22.931230   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.931237   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:22.931243   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:22.931309   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:22.963506   57716 cri.go:89] found id: ""
	I1210 05:56:22.963519   57716 logs.go:282] 0 containers: []
	W1210 05:56:22.963525   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:22.963533   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:22.963542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:23.025625   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:23.025643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:23.036825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:23.036841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:23.100693   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:23.092404   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.093143   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.094913   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.095571   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:23.097307   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:23.100703   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:23.100715   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:23.160995   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:23.161014   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:25.690455   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:25.700306   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:25.700369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:25.725916   57716 cri.go:89] found id: ""
	I1210 05:56:25.725931   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.725942   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:25.725948   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:25.726009   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:25.749914   57716 cri.go:89] found id: ""
	I1210 05:56:25.749927   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.749935   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:25.749939   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:25.749998   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:25.776070   57716 cri.go:89] found id: ""
	I1210 05:56:25.776083   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.776090   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:25.776095   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:25.776154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:25.799518   57716 cri.go:89] found id: ""
	I1210 05:56:25.799532   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.799540   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:25.799546   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:25.799608   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:25.822990   57716 cri.go:89] found id: ""
	I1210 05:56:25.823057   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.823064   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:25.823072   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:25.823138   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:25.847416   57716 cri.go:89] found id: ""
	I1210 05:56:25.847430   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.847437   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:25.847442   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:25.847500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:25.871819   57716 cri.go:89] found id: ""
	I1210 05:56:25.871833   57716 logs.go:282] 0 containers: []
	W1210 05:56:25.871840   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:25.871849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:25.871861   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:25.882590   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:25.882607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:25.975908   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:25.961777   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.962673   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967132   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.967485   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:25.972482   13526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:25.975918   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:25.975929   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:26.042569   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:26.042588   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:26.070803   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:26.070819   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
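Each pass of this loop is a set of plain crictl calls, one per expected control-plane component. Reproduced by hand, with the command taken verbatim from the Run lines above and only the loop added for brevity:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # empty output matches the `found id: ""` lines above
	done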
	I1210 05:56:28.629575   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:28.639457   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:28.639513   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:28.663811   57716 cri.go:89] found id: ""
	I1210 05:56:28.663824   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.663832   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:28.663837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:28.663892   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:28.688455   57716 cri.go:89] found id: ""
	I1210 05:56:28.688469   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.688476   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:28.688481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:28.688538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:28.711872   57716 cri.go:89] found id: ""
	I1210 05:56:28.711886   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.711893   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:28.711898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:28.711955   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:28.736153   57716 cri.go:89] found id: ""
	I1210 05:56:28.736166   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.736173   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:28.736181   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:28.736242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:28.759991   57716 cri.go:89] found id: ""
	I1210 05:56:28.760011   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.760018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:28.760023   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:28.760080   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:28.784928   57716 cri.go:89] found id: ""
	I1210 05:56:28.784942   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.784949   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:28.784955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:28.785011   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:28.808330   57716 cri.go:89] found id: ""
	I1210 05:56:28.808343   57716 logs.go:282] 0 containers: []
	W1210 05:56:28.808350   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:28.808359   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:28.808368   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:28.864140   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:28.864158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:28.874997   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:28.875030   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:28.946271   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:28.938223   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.939058   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.940712   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.941043   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:28.942516   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:28.946281   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:28.946291   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:29.015729   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:29.015750   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:31.546248   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:31.557000   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:31.557057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:31.581315   57716 cri.go:89] found id: ""
	I1210 05:56:31.581329   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.581336   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:31.581342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:31.581397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:31.606297   57716 cri.go:89] found id: ""
	I1210 05:56:31.606312   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.606327   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:31.606332   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:31.606389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:31.630600   57716 cri.go:89] found id: ""
	I1210 05:56:31.630614   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.630621   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:31.630627   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:31.630684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:31.658929   57716 cri.go:89] found id: ""
	I1210 05:56:31.658942   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.658949   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:31.658955   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:31.659042   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:31.684421   57716 cri.go:89] found id: ""
	I1210 05:56:31.684434   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.684441   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:31.684456   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:31.684529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:31.708593   57716 cri.go:89] found id: ""
	I1210 05:56:31.708607   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.708614   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:31.708620   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:31.708678   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:31.733389   57716 cri.go:89] found id: ""
	I1210 05:56:31.733403   57716 logs.go:282] 0 containers: []
	W1210 05:56:31.733411   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:31.733419   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:31.733429   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:31.762157   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:31.762171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:31.818205   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:31.818222   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:31.829166   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:31.829182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:31.894733   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:31.886837   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.887553   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889191   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.889735   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:31.891344   13754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:31.894745   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:31.894756   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.466636   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:34.477387   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:34.477462   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:34.508975   57716 cri.go:89] found id: ""
	I1210 05:56:34.508989   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.508996   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:34.509002   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:34.509058   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:34.536397   57716 cri.go:89] found id: ""
	I1210 05:56:34.536410   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.536417   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:34.536424   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:34.536482   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:34.560872   57716 cri.go:89] found id: ""
	I1210 05:56:34.560885   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.560892   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:34.560898   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:34.560959   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:34.585436   57716 cri.go:89] found id: ""
	I1210 05:56:34.585450   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.585457   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:34.585463   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:34.585520   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:34.609983   57716 cri.go:89] found id: ""
	I1210 05:56:34.609997   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.610004   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:34.610010   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:34.610065   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:34.634652   57716 cri.go:89] found id: ""
	I1210 05:56:34.634666   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.634674   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:34.634679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:34.634737   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:34.660417   57716 cri.go:89] found id: ""
	I1210 05:56:34.660431   57716 logs.go:282] 0 containers: []
	W1210 05:56:34.660438   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:34.660446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:34.660468   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:34.715849   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:34.715870   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:34.726672   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:34.726687   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:34.788897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:34.781210   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.781759   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783378   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.783973   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:34.785508   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:34.788907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:34.788917   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:34.850671   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:34.850690   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:37.378067   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:37.388018   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:37.388079   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:37.415590   57716 cri.go:89] found id: ""
	I1210 05:56:37.415604   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.415611   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:37.415617   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:37.415679   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:37.443166   57716 cri.go:89] found id: ""
	I1210 05:56:37.443179   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.443186   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:37.443192   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:37.443248   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:37.466187   57716 cri.go:89] found id: ""
	I1210 05:56:37.466201   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.466208   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:37.466214   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:37.466271   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:37.492297   57716 cri.go:89] found id: ""
	I1210 05:56:37.492321   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.492329   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:37.492335   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:37.492389   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:37.515998   57716 cri.go:89] found id: ""
	I1210 05:56:37.516012   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.516018   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:37.516024   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:37.516083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:37.540490   57716 cri.go:89] found id: ""
	I1210 05:56:37.540503   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.540510   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:37.540516   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:37.540576   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:37.565092   57716 cri.go:89] found id: ""
	I1210 05:56:37.565105   57716 logs.go:282] 0 containers: []
	W1210 05:56:37.565111   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:37.565119   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:37.565137   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:37.625814   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:37.625837   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:37.637078   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:37.637104   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:37.697146   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:37.689936   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.690349   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691533   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.691938   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:37.693652   13954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:37.697156   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:37.697182   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:37.757019   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:37.757038   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:40.287595   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:40.298582   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:40.298641   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:40.322470   57716 cri.go:89] found id: ""
	I1210 05:56:40.322484   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.322491   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:40.322497   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:40.322552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:40.346764   57716 cri.go:89] found id: ""
	I1210 05:56:40.346778   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.346785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:40.346790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:40.346851   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:40.373286   57716 cri.go:89] found id: ""
	I1210 05:56:40.373300   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.373307   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:40.373313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:40.373372   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:40.402348   57716 cri.go:89] found id: ""
	I1210 05:56:40.402361   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.402368   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:40.402373   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:40.402428   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:40.427030   57716 cri.go:89] found id: ""
	I1210 05:56:40.427044   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.427052   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:40.427057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:40.427117   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:40.451451   57716 cri.go:89] found id: ""
	I1210 05:56:40.451478   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.451485   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:40.451491   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:40.451554   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:40.480083   57716 cri.go:89] found id: ""
	I1210 05:56:40.480100   57716 logs.go:282] 0 containers: []
	W1210 05:56:40.480106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:40.480114   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:40.480124   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:40.490894   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:40.490909   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:40.556681   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:40.549171   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.549844   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551479   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.551814   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:40.553287   14056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:40.556692   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:40.556702   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:40.619424   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:40.619443   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:40.652592   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:40.652608   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
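The log sources gathered on each pass are ordinary journalctl, dmesg, and crictl invocations, copied verbatim from the Run lines above; run in sequence they reproduce the collection step (the fifth source, kubectl describe nodes, keeps failing here because the apiserver is down):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u containerd -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a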
	I1210 05:56:43.210686   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:43.221608   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:43.221673   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:43.249950   57716 cri.go:89] found id: ""
	I1210 05:56:43.249964   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.249971   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:43.249977   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:43.250038   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:43.276671   57716 cri.go:89] found id: ""
	I1210 05:56:43.276685   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.276692   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:43.276697   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:43.276752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:43.301078   57716 cri.go:89] found id: ""
	I1210 05:56:43.301092   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.301099   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:43.301105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:43.301166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:43.325712   57716 cri.go:89] found id: ""
	I1210 05:56:43.325725   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.325732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:43.325753   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:43.325807   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:43.350013   57716 cri.go:89] found id: ""
	I1210 05:56:43.350027   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.350034   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:43.350039   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:43.350095   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:43.374239   57716 cri.go:89] found id: ""
	I1210 05:56:43.374253   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.374259   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:43.374265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:43.374325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:43.398684   57716 cri.go:89] found id: ""
	I1210 05:56:43.398697   57716 logs.go:282] 0 containers: []
	W1210 05:56:43.398704   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:43.398713   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:43.398723   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:43.429674   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:43.429692   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:43.486606   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:43.486624   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:43.497851   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:43.497867   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:43.564988   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:43.556980   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.557595   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559286   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.559906   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:43.561769   14172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:43.565001   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:43.565011   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:46.128659   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:46.139799   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:46.139857   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:46.169381   57716 cri.go:89] found id: ""
	I1210 05:56:46.169395   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.169402   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:46.169408   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:46.169468   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:46.198882   57716 cri.go:89] found id: ""
	I1210 05:56:46.198896   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.198903   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:46.198909   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:46.198966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:46.234049   57716 cri.go:89] found id: ""
	I1210 05:56:46.234064   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.234072   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:46.234077   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:46.234134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:46.260031   57716 cri.go:89] found id: ""
	I1210 05:56:46.260044   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.260051   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:46.260057   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:46.260112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:46.284339   57716 cri.go:89] found id: ""
	I1210 05:56:46.284353   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.284361   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:46.284366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:46.284425   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:46.309943   57716 cri.go:89] found id: ""
	I1210 05:56:46.309957   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.309964   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:46.309970   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:46.310026   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:46.335200   57716 cri.go:89] found id: ""
	I1210 05:56:46.335215   57716 logs.go:282] 0 containers: []
	W1210 05:56:46.335222   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:46.335235   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:46.335247   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:46.391563   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:46.391580   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:46.403485   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:46.403501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:46.469778   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:46.461822   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.462325   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464066   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.464772   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:46.466293   14263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:46.469787   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:46.469798   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:46.533492   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:46.533510   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
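The outer retry is a poll for a running apiserver process, repeating roughly every 2.5-3 seconds going by the timestamps. A minimal shell equivalent of that wait, as a sketch: the pgrep pattern is verbatim from the log, but the interval is read off the timestamps, not taken from minikube's source.

	# poll until an apiserver whose command line matches the profile shows up
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 2.5   # assumed interval
	done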
	I1210 05:56:49.061494   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:49.071430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:49.071494   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:49.094941   57716 cri.go:89] found id: ""
	I1210 05:56:49.094961   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.094969   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:49.094974   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:49.095053   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:49.119980   57716 cri.go:89] found id: ""
	I1210 05:56:49.119994   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.120001   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:49.120006   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:49.120061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:49.149253   57716 cri.go:89] found id: ""
	I1210 05:56:49.149267   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.149275   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:49.149280   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:49.149339   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:49.190394   57716 cri.go:89] found id: ""
	I1210 05:56:49.190407   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.190414   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:49.190419   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:49.190474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:49.226315   57716 cri.go:89] found id: ""
	I1210 05:56:49.226328   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.226335   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:49.226340   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:49.226398   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:49.253703   57716 cri.go:89] found id: ""
	I1210 05:56:49.253716   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.253723   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:49.253729   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:49.253793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:49.278595   57716 cri.go:89] found id: ""
	I1210 05:56:49.278609   57716 logs.go:282] 0 containers: []
	W1210 05:56:49.278616   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:49.278633   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:49.278643   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:49.339769   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:49.339786   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:49.368179   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:49.368196   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:49.424135   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:49.424152   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:49.435251   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:49.435277   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:49.499081   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:49.491345   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.492104   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.493573   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.494053   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:49.495641   14379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
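At this point kubectl cannot reach the API server on localhost:8441 (connection refused) and no kube-apiserver container exists, so minikube keeps re-probing; the timestamps below show roughly three-second intervals. A minimal sketch of the equivalent wait loop, assuming the same pgrep pattern shown in the log above:

    # Hypothetical re-creation of the probe repeated below; the sleep
    # approximates the interval visible in the log timestamps.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3
    done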
	I1210 05:56:52.000764   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:52.011936   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:52.011997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:52.044999   57716 cri.go:89] found id: ""
	I1210 05:56:52.045013   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.045020   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:52.045026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:52.045084   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:52.069248   57716 cri.go:89] found id: ""
	I1210 05:56:52.069262   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.069269   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:52.069274   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:52.069340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:52.098397   57716 cri.go:89] found id: ""
	I1210 05:56:52.098410   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.098428   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:52.098435   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:52.098500   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:52.126868   57716 cri.go:89] found id: ""
	I1210 05:56:52.126887   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.126905   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:52.126910   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:52.126965   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:52.150645   57716 cri.go:89] found id: ""
	I1210 05:56:52.150658   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.150666   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:52.150681   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:52.150740   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:52.186283   57716 cri.go:89] found id: ""
	I1210 05:56:52.186296   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.186304   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:52.186318   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:52.186374   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:52.218438   57716 cri.go:89] found id: ""
	I1210 05:56:52.218451   57716 logs.go:282] 0 containers: []
	W1210 05:56:52.218458   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:52.218476   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:52.218486   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:52.281011   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:52.273152   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.273845   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.275592   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.276072   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:52.277623   14467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:52.281021   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:52.281032   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:52.342042   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:52.342058   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:52.373121   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:52.373136   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:52.428970   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:52.428987   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
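The four "Gathering logs" steps above collect the same evidence each round: systemd journal entries for containerd and kubelet, recent kernel warnings from dmesg, and a container listing with a docker fallback. A sketch of reproducing that collection by hand from inside the node (for example via minikube ssh), with the commands copied from the Run: lines above:

    # Manual repetition of the same gather; each command is taken verbatim
    # from the log, with the backtick substitution rewritten as $(...).
    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a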
	I1210 05:56:54.940399   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:54.950167   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:54.950228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:54.974172   57716 cri.go:89] found id: ""
	I1210 05:56:54.974186   57716 logs.go:282] 0 containers: []
	W1210 05:56:54.974193   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:54.974199   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:54.974257   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:55.008246   57716 cri.go:89] found id: ""
	I1210 05:56:55.008262   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.008270   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:55.008275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:55.008340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:55.034655   57716 cri.go:89] found id: ""
	I1210 05:56:55.034669   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.034676   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:55.034682   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:55.034741   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:55.063972   57716 cri.go:89] found id: ""
	I1210 05:56:55.063986   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.063994   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:55.063999   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:55.064057   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:55.090263   57716 cri.go:89] found id: ""
	I1210 05:56:55.090275   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.090292   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:55.090298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:55.090353   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:55.113407   57716 cri.go:89] found id: ""
	I1210 05:56:55.113421   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.113428   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:55.113433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:55.113491   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:55.140991   57716 cri.go:89] found id: ""
	I1210 05:56:55.141010   57716 logs.go:282] 0 containers: []
	W1210 05:56:55.141018   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:55.141025   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:55.141036   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:55.201731   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:55.201749   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:55.218256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:55.218270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:55.290800   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:55.282984   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.283573   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285214   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.285730   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:55.287308   14579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:56:55.290811   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:55.290831   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:55.355200   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:55.355218   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
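Each polling round walks the same list of control-plane names with crictl; --quiet prints only container IDs, so an empty result is exactly what produces the found id: "" and 0 containers lines above. A compact, hypothetical sketch of that per-component probe built from the same commands:

    # One pass over the component names the log checks each round.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        # An empty $ids corresponds to the log's: No container was found matching "$c"
        [ -z "$ids" ] && echo "no container matching $c"
    done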
	I1210 05:56:57.881741   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:56:57.891584   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:56:57.891646   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:56:57.918310   57716 cri.go:89] found id: ""
	I1210 05:56:57.918323   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.918330   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:56:57.918336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:56:57.918391   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:56:57.942318   57716 cri.go:89] found id: ""
	I1210 05:56:57.942331   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.942338   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:56:57.942344   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:56:57.942402   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:56:57.966253   57716 cri.go:89] found id: ""
	I1210 05:56:57.966267   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.966274   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:56:57.966279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:56:57.966338   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:56:57.990324   57716 cri.go:89] found id: ""
	I1210 05:56:57.990338   57716 logs.go:282] 0 containers: []
	W1210 05:56:57.990346   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:56:57.990351   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:56:57.990414   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:56:58.021444   57716 cri.go:89] found id: ""
	I1210 05:56:58.021458   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.021466   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:56:58.021471   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:56:58.021529   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:56:58.046661   57716 cri.go:89] found id: ""
	I1210 05:56:58.046680   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.046688   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:56:58.046699   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:56:58.046767   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:56:58.071123   57716 cri.go:89] found id: ""
	I1210 05:56:58.071137   57716 logs.go:282] 0 containers: []
	W1210 05:56:58.071145   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:56:58.071153   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:56:58.071162   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:56:58.135978   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:56:58.135998   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:56:58.167638   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:56:58.167656   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:56:58.232589   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:56:58.232610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:56:58.244347   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:56:58.244363   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:56:58.304989   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:56:58.297197   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.297898   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.299609   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.300132   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:56:58.301733   14696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
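The repeated describe-nodes failure is a symptom rather than the cause: kubectl targets the profile's apiserver port 8441 (per the kubeconfig in the command) and the connection is refused because no apiserver container was ever started. A hedged way to confirm the refusal directly, reusing the port and filter from the log:

    # Expected results while the apiserver is down, matching the errors above:
    curl -ksS https://localhost:8441/healthz           # -> connection refused
    sudo crictl ps -a --quiet --name=kube-apiserver    # -> empty output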
	I1210 05:57:00.806679   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:00.816733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:00.816793   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:00.845594   57716 cri.go:89] found id: ""
	I1210 05:57:00.845608   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.845615   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:00.845622   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:00.845682   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:00.880377   57716 cri.go:89] found id: ""
	I1210 05:57:00.880391   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.880399   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:00.880405   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:00.880463   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:00.904970   57716 cri.go:89] found id: ""
	I1210 05:57:00.904990   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.904997   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:00.905003   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:00.905063   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:00.933169   57716 cri.go:89] found id: ""
	I1210 05:57:00.933183   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.933191   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:00.933196   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:00.933255   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:00.962218   57716 cri.go:89] found id: ""
	I1210 05:57:00.962231   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.962238   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:00.962244   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:00.962301   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:00.987794   57716 cri.go:89] found id: ""
	I1210 05:57:00.987807   57716 logs.go:282] 0 containers: []
	W1210 05:57:00.987814   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:00.987820   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:00.987879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:01.014287   57716 cri.go:89] found id: ""
	I1210 05:57:01.014302   57716 logs.go:282] 0 containers: []
	W1210 05:57:01.014309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:01.014318   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:01.014328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:01.045925   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:01.045941   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:01.102696   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:01.102714   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:01.114077   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:01.114092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:01.201703   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:01.177406   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.182687   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.186518   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.195186   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:01.196003   14793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:01.201726   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:01.201738   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:03.774227   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:03.784265   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:03.784325   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:03.809259   57716 cri.go:89] found id: ""
	I1210 05:57:03.809273   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.809280   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:03.809285   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:03.809347   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:03.835314   57716 cri.go:89] found id: ""
	I1210 05:57:03.835329   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.835336   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:03.835342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:03.835401   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:03.860149   57716 cri.go:89] found id: ""
	I1210 05:57:03.860163   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.860170   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:03.860175   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:03.860243   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:03.886583   57716 cri.go:89] found id: ""
	I1210 05:57:03.886597   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.886604   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:03.886610   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:03.886669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:03.915441   57716 cri.go:89] found id: ""
	I1210 05:57:03.915454   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.915462   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:03.915467   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:03.915528   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:03.939994   57716 cri.go:89] found id: ""
	I1210 05:57:03.940008   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.940015   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:03.940021   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:03.944397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:03.970729   57716 cri.go:89] found id: ""
	I1210 05:57:03.970742   57716 logs.go:282] 0 containers: []
	W1210 05:57:03.970749   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:03.970757   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:03.970768   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:04.027596   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:04.027617   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:04.039557   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:04.039578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:04.105314   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:04.097441   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.098313   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.099991   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.100340   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:04.101876   14886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:04.105325   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:04.105336   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:04.167908   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:04.167927   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:06.703048   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:06.712953   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:06.713014   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:06.740745   57716 cri.go:89] found id: ""
	I1210 05:57:06.740759   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.740766   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:06.740771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:06.740826   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:06.764572   57716 cri.go:89] found id: ""
	I1210 05:57:06.764585   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.764592   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:06.764598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:06.764654   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:06.792403   57716 cri.go:89] found id: ""
	I1210 05:57:06.792418   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.792425   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:06.792430   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:06.792488   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:06.816569   57716 cri.go:89] found id: ""
	I1210 05:57:06.816583   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.816591   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:06.816596   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:06.816659   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:06.841104   57716 cri.go:89] found id: ""
	I1210 05:57:06.841118   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.841125   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:06.841131   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:06.841191   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:06.863923   57716 cri.go:89] found id: ""
	I1210 05:57:06.863936   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.863943   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:06.863949   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:06.864004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:06.889078   57716 cri.go:89] found id: ""
	I1210 05:57:06.889091   57716 logs.go:282] 0 containers: []
	W1210 05:57:06.889099   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:06.889106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:06.889116   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:06.943842   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:06.943863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:06.954461   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:06.954477   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:07.025823   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:07.017208   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.017859   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.019473   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.020044   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:07.021782   14990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:07.025833   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:07.025847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:07.087136   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:07.087156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.618129   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:09.627876   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:09.627939   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:09.655385   57716 cri.go:89] found id: ""
	I1210 05:57:09.655399   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.655406   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:09.655411   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:09.655476   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:09.678439   57716 cri.go:89] found id: ""
	I1210 05:57:09.678453   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.678460   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:09.678466   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:09.678521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:09.708049   57716 cri.go:89] found id: ""
	I1210 05:57:09.708063   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.708071   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:09.708076   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:09.708134   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:09.731272   57716 cri.go:89] found id: ""
	I1210 05:57:09.731286   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.731293   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:09.731298   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:09.731355   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:09.756542   57716 cri.go:89] found id: ""
	I1210 05:57:09.756556   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.756563   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:09.756569   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:09.756625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:09.782376   57716 cri.go:89] found id: ""
	I1210 05:57:09.782389   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.782396   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:09.782402   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:09.782469   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:09.806766   57716 cri.go:89] found id: ""
	I1210 05:57:09.806780   57716 logs.go:282] 0 containers: []
	W1210 05:57:09.806787   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:09.806795   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:09.806806   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:09.817591   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:09.817607   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:09.877883   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:09.869907   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.870472   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872036   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.872545   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:09.874081   15092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:09.877897   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:09.877907   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:09.939799   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:09.939817   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:09.972539   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:09.972555   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.528080   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:12.538052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:12.538112   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:12.561407   57716 cri.go:89] found id: ""
	I1210 05:57:12.561421   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.561429   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:12.561434   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:12.561504   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:12.587323   57716 cri.go:89] found id: ""
	I1210 05:57:12.587337   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.587344   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:12.587349   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:12.587407   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:12.611528   57716 cri.go:89] found id: ""
	I1210 05:57:12.611542   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.611550   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:12.611555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:12.611613   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:12.639252   57716 cri.go:89] found id: ""
	I1210 05:57:12.639266   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.639273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:12.639278   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:12.639340   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:12.662845   57716 cri.go:89] found id: ""
	I1210 05:57:12.662858   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.662865   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:12.662871   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:12.662924   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:12.687312   57716 cri.go:89] found id: ""
	I1210 05:57:12.687325   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.687332   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:12.687338   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:12.687410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:12.712443   57716 cri.go:89] found id: ""
	I1210 05:57:12.712456   57716 logs.go:282] 0 containers: []
	W1210 05:57:12.712463   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:12.712471   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:12.712484   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:12.772312   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:12.772330   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:12.800589   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:12.800611   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:12.856815   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:12.856832   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:12.868411   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:12.868427   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:12.938613   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:12.928160   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.928723   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.932890   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.933536   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:12.935292   15213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
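
The repeated "connection refused" on localhost:8441 above means no apiserver process ever came up inside the node, so every describe-nodes pass fails before it can reach the cluster. The same endpoint can be probed by hand; a minimal sketch, where <profile> is a placeholder for the profile name and 8441 is the apiserver port shown in the log:

    # Check whether anything is listening on the apiserver port inside the node.
    minikube -p <profile> ssh -- sudo ss -tlnp | grep 8441
    # Probe the apiserver health endpoint directly; -k skips certificate checks.
    minikube -p <profile> ssh -- curl -k https://localhost:8441/healthz
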
	I1210 05:57:15.439137   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:15.449933   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:15.450005   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:15.483755   57716 cri.go:89] found id: ""
	I1210 05:57:15.483769   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.483775   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:15.483781   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:15.483837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:15.507520   57716 cri.go:89] found id: ""
	I1210 05:57:15.507534   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.507542   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:15.507547   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:15.507605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:15.534553   57716 cri.go:89] found id: ""
	I1210 05:57:15.534566   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.534573   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:15.534578   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:15.534635   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:15.559360   57716 cri.go:89] found id: ""
	I1210 05:57:15.559374   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.559381   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:15.559386   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:15.559443   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:15.584591   57716 cri.go:89] found id: ""
	I1210 05:57:15.584607   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.584614   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:15.584619   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:15.584677   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:15.613451   57716 cri.go:89] found id: ""
	I1210 05:57:15.613471   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.613479   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:15.613485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:15.613607   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:15.638843   57716 cri.go:89] found id: ""
	I1210 05:57:15.638858   57716 logs.go:282] 0 containers: []
	W1210 05:57:15.638865   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:15.638874   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:15.638884   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:15.694185   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:15.694203   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:15.704709   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:15.704725   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:15.769534   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:15.761459   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.762286   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.763956   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.764609   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.766176   15306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:15.769543   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:15.769556   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:15.830240   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:15.830258   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
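
Each diagnostic pass above enumerates the control-plane containers one name at a time with crictl. The same sweep as a single loop run inside the node; a sketch of the pattern the log uses, assuming crictl is configured against the containerd socket:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"  # empty output: no such container, running or exited
    done
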
	I1210 05:57:18.356935   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:18.366837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:18.366896   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:18.391280   57716 cri.go:89] found id: ""
	I1210 05:57:18.391294   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.391301   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:18.391308   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:18.391376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:18.421532   57716 cri.go:89] found id: ""
	I1210 05:57:18.421546   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.421553   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:18.421558   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:18.421625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:18.455057   57716 cri.go:89] found id: ""
	I1210 05:57:18.455071   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.455078   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:18.455083   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:18.455153   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:18.488121   57716 cri.go:89] found id: ""
	I1210 05:57:18.488135   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.488142   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:18.488148   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:18.488210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:18.511864   57716 cri.go:89] found id: ""
	I1210 05:57:18.511878   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.511886   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:18.511905   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:18.511966   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:18.535922   57716 cri.go:89] found id: ""
	I1210 05:57:18.535936   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.535957   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:18.535963   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:18.536029   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:18.560287   57716 cri.go:89] found id: ""
	I1210 05:57:18.560302   57716 logs.go:282] 0 containers: []
	W1210 05:57:18.560309   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:18.560317   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:18.560328   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:18.627753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:18.619860   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.620508   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622357   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.622888   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:18.624346   15405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:18.627764   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:18.627776   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:18.688471   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:18.688489   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:18.719143   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:18.719159   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:18.774435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:18.774453   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
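
The "Gathering logs" steps are plain journalctl and dmesg invocations; pulled by hand inside the node they look like this (a sketch mirroring the commands in the log, minus its display-only flags):

    sudo journalctl -u kubelet -n 400      # last 400 kubelet journal lines
    sudo journalctl -u containerd -n 400   # last 400 containerd journal lines
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400  # recent kernel warnings and errors
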
	I1210 05:57:21.285722   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:21.295523   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:21.295582   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:21.322675   57716 cri.go:89] found id: ""
	I1210 05:57:21.322688   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.322696   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:21.322701   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:21.322758   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:21.347136   57716 cri.go:89] found id: ""
	I1210 05:57:21.347150   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.347157   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:21.347162   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:21.347219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:21.372204   57716 cri.go:89] found id: ""
	I1210 05:57:21.372217   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.372224   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:21.372229   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:21.372283   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:21.395417   57716 cri.go:89] found id: ""
	I1210 05:57:21.395431   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.395438   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:21.395443   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:21.395515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:21.440154   57716 cri.go:89] found id: ""
	I1210 05:57:21.440167   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.440174   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:21.440179   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:21.440240   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:21.473140   57716 cri.go:89] found id: ""
	I1210 05:57:21.473154   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.473166   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:21.473172   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:21.473227   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:21.501607   57716 cri.go:89] found id: ""
	I1210 05:57:21.501630   57716 logs.go:282] 0 containers: []
	W1210 05:57:21.501638   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:21.501646   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:21.501657   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:21.534381   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:21.534397   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:21.591435   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:21.591454   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:21.602570   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:21.602586   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:21.665543   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:21.656612   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.657173   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659237   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.659598   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:21.661250   15525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:21.665553   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:21.665564   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.232360   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:24.242545   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:24.242605   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:24.268962   57716 cri.go:89] found id: ""
	I1210 05:57:24.268976   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.268983   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:24.268989   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:24.269051   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:24.293625   57716 cri.go:89] found id: ""
	I1210 05:57:24.293638   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.293645   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:24.293650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:24.293706   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:24.323101   57716 cri.go:89] found id: ""
	I1210 05:57:24.323115   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.323122   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:24.323127   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:24.323184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:24.352417   57716 cri.go:89] found id: ""
	I1210 05:57:24.352431   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.352442   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:24.352448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:24.352506   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:24.377825   57716 cri.go:89] found id: ""
	I1210 05:57:24.377839   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.377846   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:24.377851   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:24.377907   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:24.401476   57716 cri.go:89] found id: ""
	I1210 05:57:24.401490   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.401497   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:24.401502   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:24.401560   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:24.430784   57716 cri.go:89] found id: ""
	I1210 05:57:24.430798   57716 logs.go:282] 0 containers: []
	W1210 05:57:24.430805   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:24.430813   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:24.430826   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:24.496086   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:24.496105   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:24.508163   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:24.508178   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:24.572343   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:24.563972   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.564594   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.566433   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.567204   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:24.568973   15618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:24.572354   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:24.572365   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:24.634266   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:24.634284   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:27.162032   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:27.171692   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:27.171751   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:27.195293   57716 cri.go:89] found id: ""
	I1210 05:57:27.195306   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.195313   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:27.195319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:27.195375   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:27.223719   57716 cri.go:89] found id: ""
	I1210 05:57:27.223733   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.223741   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:27.223746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:27.223805   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:27.249635   57716 cri.go:89] found id: ""
	I1210 05:57:27.249648   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.249655   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:27.249661   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:27.249718   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:27.274420   57716 cri.go:89] found id: ""
	I1210 05:57:27.274434   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.274443   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:27.274448   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:27.274515   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:27.302747   57716 cri.go:89] found id: ""
	I1210 05:57:27.302760   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.302777   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:27.302782   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:27.302842   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:27.327624   57716 cri.go:89] found id: ""
	I1210 05:57:27.327638   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.327645   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:27.327650   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:27.327710   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:27.351138   57716 cri.go:89] found id: ""
	I1210 05:57:27.351152   57716 logs.go:282] 0 containers: []
	W1210 05:57:27.351159   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:27.351168   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:27.351179   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:27.416428   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:27.416448   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:27.458729   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:27.458746   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:27.517941   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:27.517959   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:27.528443   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:27.528459   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:27.592381   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:27.584705   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.585249   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.586673   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.587168   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:27.588572   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
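
Between passes minikube polls for the apiserver process with pgrep (-x exact match, -n newest, -f match against the full command line) and retries every few seconds. A standalone version of that wait loop; a sketch in which the 3-second interval matches the timestamps above and the 120-second budget is an arbitrary choice:

    budget=120
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
      budget=$((budget - 3))
      [ "$budget" -le 0 ] && { echo "apiserver never started" >&2; exit 1; }
    done
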
	I1210 05:57:30.094042   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:30.104609   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:30.104685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:30.131255   57716 cri.go:89] found id: ""
	I1210 05:57:30.131270   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.131277   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:30.131283   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:30.131348   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:30.160477   57716 cri.go:89] found id: ""
	I1210 05:57:30.160491   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.160498   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:30.160503   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:30.160562   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:30.186824   57716 cri.go:89] found id: ""
	I1210 05:57:30.186837   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.186845   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:30.186850   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:30.186910   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:30.212870   57716 cri.go:89] found id: ""
	I1210 05:57:30.212885   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.212892   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:30.212899   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:30.212957   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:30.238085   57716 cri.go:89] found id: ""
	I1210 05:57:30.238098   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.238105   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:30.238111   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:30.238169   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:30.264614   57716 cri.go:89] found id: ""
	I1210 05:57:30.264628   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.264635   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:30.264641   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:30.264697   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:30.292801   57716 cri.go:89] found id: ""
	I1210 05:57:30.292816   57716 logs.go:282] 0 containers: []
	W1210 05:57:30.292823   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:30.292831   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:30.292841   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:30.324527   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:30.324543   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:30.382130   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:30.382156   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:30.392903   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:30.392921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:30.479224   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:30.470442   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.471725   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.473752   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.474178   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:30.475815   15833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:30.479235   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:30.479257   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.043979   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:33.054086   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:33.054144   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:33.079719   57716 cri.go:89] found id: ""
	I1210 05:57:33.079733   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.079740   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:33.079746   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:33.079804   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:33.109000   57716 cri.go:89] found id: ""
	I1210 05:57:33.109013   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.109020   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:33.109026   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:33.109083   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:33.134184   57716 cri.go:89] found id: ""
	I1210 05:57:33.134198   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.134206   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:33.134213   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:33.134275   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:33.158142   57716 cri.go:89] found id: ""
	I1210 05:57:33.158155   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.158162   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:33.158168   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:33.158253   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:33.181293   57716 cri.go:89] found id: ""
	I1210 05:57:33.181306   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.181313   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:33.181319   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:33.181376   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:33.206025   57716 cri.go:89] found id: ""
	I1210 05:57:33.206040   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.206047   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:33.206052   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:33.206149   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:33.230253   57716 cri.go:89] found id: ""
	I1210 05:57:33.230267   57716 logs.go:282] 0 containers: []
	W1210 05:57:33.230275   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:33.230283   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:33.230293   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:33.292011   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:33.292028   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:33.318004   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:33.318019   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:33.377256   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:33.377273   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:33.387928   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:33.387943   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:33.461753   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:33.453954   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.454800   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456253   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.456768   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:33.458350   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:35.962013   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:35.972548   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:35.972622   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:36.000855   57716 cri.go:89] found id: ""
	I1210 05:57:36.000870   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.000880   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:36.000900   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:36.000977   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:36.029136   57716 cri.go:89] found id: ""
	I1210 05:57:36.029151   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.029158   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:36.029164   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:36.029228   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:36.054512   57716 cri.go:89] found id: ""
	I1210 05:57:36.054525   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.054533   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:36.054538   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:36.054597   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:36.080508   57716 cri.go:89] found id: ""
	I1210 05:57:36.080522   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.080529   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:36.080535   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:36.080594   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:36.108590   57716 cri.go:89] found id: ""
	I1210 05:57:36.108604   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.108611   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:36.108616   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:36.108684   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:36.137690   57716 cri.go:89] found id: ""
	I1210 05:57:36.137704   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.137711   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:36.137716   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:36.137777   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:36.164307   57716 cri.go:89] found id: ""
	I1210 05:57:36.164321   57716 logs.go:282] 0 containers: []
	W1210 05:57:36.164328   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:36.164335   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:36.164345   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:36.219816   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:36.219833   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:36.231171   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:36.231187   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:36.294059   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:36.285785   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.286547   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288109   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.288462   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:36.290084   16034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:36.294068   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:36.294078   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:36.358593   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:36.358612   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
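
The "container status" command is deliberately defensive: it resolves crictl through which (falling back to the bare name if it is not on PATH) and, if the crictl listing fails outright, falls back to docker. The same chain spelled out; a sketch of the fallback logic only:

    CRICTL="$(which crictl || echo crictl)"    # absolute path if installed, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a  # docker listing as the last resort
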
	I1210 05:57:38.888296   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:38.898447   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:38.898505   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:38.925123   57716 cri.go:89] found id: ""
	I1210 05:57:38.925137   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.925144   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:38.925150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:38.925210   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:38.949713   57716 cri.go:89] found id: ""
	I1210 05:57:38.949727   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.949734   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:38.949739   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:38.949797   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:38.974867   57716 cri.go:89] found id: ""
	I1210 05:57:38.974881   57716 logs.go:282] 0 containers: []
	W1210 05:57:38.974888   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:38.974893   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:38.974949   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:39.008214   57716 cri.go:89] found id: ""
	I1210 05:57:39.008228   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.008235   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:39.008240   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:39.008300   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:39.033316   57716 cri.go:89] found id: ""
	I1210 05:57:39.033330   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.033342   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:39.033347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:39.033405   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:39.057634   57716 cri.go:89] found id: ""
	I1210 05:57:39.057648   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.057655   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:39.057660   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:39.057719   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:39.082101   57716 cri.go:89] found id: ""
	I1210 05:57:39.082115   57716 logs.go:282] 0 containers: []
	W1210 05:57:39.082125   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:39.082133   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:39.082143   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:39.144897   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:39.137033   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.137582   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139164   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.139565   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:39.141172   16136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:39.144907   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:39.144920   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:39.209520   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:39.209538   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:39.239106   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:39.239121   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:39.294711   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:39.294728   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:41.805411   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:41.814952   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:41.815027   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:41.838919   57716 cri.go:89] found id: ""
	I1210 05:57:41.838933   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.838940   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:41.838946   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:41.839004   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:41.865368   57716 cri.go:89] found id: ""
	I1210 05:57:41.865382   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.865389   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:41.865394   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:41.865452   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:41.889411   57716 cri.go:89] found id: ""
	I1210 05:57:41.889424   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.889431   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:41.889436   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:41.889521   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:41.915079   57716 cri.go:89] found id: ""
	I1210 05:57:41.915093   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.915101   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:41.915110   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:41.915173   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:41.940274   57716 cri.go:89] found id: ""
	I1210 05:57:41.940288   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.940295   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:41.940301   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:41.940360   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:41.969301   57716 cri.go:89] found id: ""
	I1210 05:57:41.969314   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.969321   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:41.969329   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:41.969387   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:41.993086   57716 cri.go:89] found id: ""
	I1210 05:57:41.993100   57716 logs.go:282] 0 containers: []
	W1210 05:57:41.993108   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:41.993116   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:41.993127   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:42.006335   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:42.006357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:42.077276   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:42.067659   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.069125   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.070001   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071203   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:42.071880   16244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:42.077290   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:42.077302   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:42.143212   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:42.143248   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:42.179140   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:42.179158   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:44.752413   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:44.762150   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:44.762207   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:44.791897   57716 cri.go:89] found id: ""
	I1210 05:57:44.791911   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.791918   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:44.791924   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:44.791983   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:44.815813   57716 cri.go:89] found id: ""
	I1210 05:57:44.815827   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.815834   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:44.815839   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:44.815894   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:44.839318   57716 cri.go:89] found id: ""
	I1210 05:57:44.839331   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.839337   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:44.839342   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:44.839399   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:44.866822   57716 cri.go:89] found id: ""
	I1210 05:57:44.866835   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.866842   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:44.866848   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:44.866904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:44.892455   57716 cri.go:89] found id: ""
	I1210 05:57:44.892469   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.892476   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:44.892481   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:44.892536   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:44.920574   57716 cri.go:89] found id: ""
	I1210 05:57:44.920588   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.920596   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:44.920602   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:44.920663   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:44.947951   57716 cri.go:89] found id: ""
	I1210 05:57:44.947965   57716 logs.go:282] 0 containers: []
	W1210 05:57:44.947971   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:44.947979   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:44.947988   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:45.005480   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:45.005501   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:45.022560   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:45.022578   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:45.142523   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:45.129527   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.130054   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.132621   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.134289   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:45.135580   16351 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:45.142534   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:45.142550   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:45.216088   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:45.216135   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:47.759715   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:47.769555   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:47.769615   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:47.793943   57716 cri.go:89] found id: ""
	I1210 05:57:47.793957   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.793964   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:47.793969   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:47.794039   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:47.818334   57716 cri.go:89] found id: ""
	I1210 05:57:47.818348   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.818355   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:47.818360   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:47.818417   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:47.842582   57716 cri.go:89] found id: ""
	I1210 05:57:47.842599   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.842617   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:47.842623   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:47.842689   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:47.868471   57716 cri.go:89] found id: ""
	I1210 05:57:47.868485   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.868492   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:47.868498   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:47.868559   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:47.897381   57716 cri.go:89] found id: ""
	I1210 05:57:47.897394   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.897401   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:47.897416   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:47.897473   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:47.920386   57716 cri.go:89] found id: ""
	I1210 05:57:47.920400   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.920407   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:47.920412   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:47.920474   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:47.947866   57716 cri.go:89] found id: ""
	I1210 05:57:47.947879   57716 logs.go:282] 0 containers: []
	W1210 05:57:47.947886   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:47.947894   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:47.947904   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:48.008844   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:48.008863   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:48.038885   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:48.038903   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:48.095592   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:48.095610   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:48.107140   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:48.107155   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:48.171340   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:48.162734   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.163476   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165210   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.165663   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:48.167242   16468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:50.672091   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:50.683391   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:50.683451   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:50.711296   57716 cri.go:89] found id: ""
	I1210 05:57:50.711311   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.711319   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:50.711327   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:50.711382   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:50.740763   57716 cri.go:89] found id: ""
	I1210 05:57:50.740777   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.740785   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:50.740790   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:50.740853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:50.772079   57716 cri.go:89] found id: ""
	I1210 05:57:50.772093   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.772111   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:50.772117   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:50.772184   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:50.800962   57716 cri.go:89] found id: ""
	I1210 05:57:50.800975   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.800982   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:50.800988   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:50.801044   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:50.825974   57716 cri.go:89] found id: ""
	I1210 05:57:50.825993   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.826000   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:50.826005   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:50.826061   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:50.854343   57716 cri.go:89] found id: ""
	I1210 05:57:50.854356   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.854364   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:50.854369   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:50.854426   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:50.878560   57716 cri.go:89] found id: ""
	I1210 05:57:50.878573   57716 logs.go:282] 0 containers: []
	W1210 05:57:50.878581   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:50.878599   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:50.878609   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:50.906006   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:50.906022   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:50.961851   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:50.961869   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:50.973152   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:50.973171   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:51.044678   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:51.036912   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.037431   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039082   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.039573   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:51.041151   16570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:51.044689   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:51.044699   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:53.606481   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:53.616567   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:53.616625   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:53.641012   57716 cri.go:89] found id: ""
	I1210 05:57:53.641025   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.641031   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:53.641037   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:53.641092   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:53.673275   57716 cri.go:89] found id: ""
	I1210 05:57:53.673290   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.673307   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:53.673313   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:53.673369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:53.709276   57716 cri.go:89] found id: ""
	I1210 05:57:53.709291   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.709298   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:53.709302   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:53.709369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:53.739332   57716 cri.go:89] found id: ""
	I1210 05:57:53.739346   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.739353   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:53.739358   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:53.739415   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:53.764637   57716 cri.go:89] found id: ""
	I1210 05:57:53.764650   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.764657   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:53.764662   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:53.764717   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:53.793424   57716 cri.go:89] found id: ""
	I1210 05:57:53.793438   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.793446   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:53.793451   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:53.793514   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:53.823828   57716 cri.go:89] found id: ""
	I1210 05:57:53.823842   57716 logs.go:282] 0 containers: []
	W1210 05:57:53.823849   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:53.823857   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:53.823868   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:53.834565   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:53.834583   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:53.898035   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:53.890056   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.890844   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.892495   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.893000   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:53.894620   16659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:53.898052   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:53.898063   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:53.960027   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:53.960044   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:53.988584   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:53.988600   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.551892   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:56.562044   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:56.562109   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:56.587872   57716 cri.go:89] found id: ""
	I1210 05:57:56.587889   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.587897   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:56.587902   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:56.587967   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:56.613907   57716 cri.go:89] found id: ""
	I1210 05:57:56.613920   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.613927   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:56.613932   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:56.613988   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:56.638685   57716 cri.go:89] found id: ""
	I1210 05:57:56.638699   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.638706   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:56.638711   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:56.638768   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:56.665211   57716 cri.go:89] found id: ""
	I1210 05:57:56.665225   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.665232   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:56.665237   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:56.665295   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:56.696149   57716 cri.go:89] found id: ""
	I1210 05:57:56.696163   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.696169   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:56.696174   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:56.696231   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:56.728016   57716 cri.go:89] found id: ""
	I1210 05:57:56.728029   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.728036   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:56.728042   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:56.728104   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:56.752871   57716 cri.go:89] found id: ""
	I1210 05:57:56.752886   57716 logs.go:282] 0 containers: []
	W1210 05:57:56.752894   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:56.752901   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:56.752913   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:56.783267   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:56.783283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:57:56.842023   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:56.842046   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:56.853533   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:56.853549   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:56.914976   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:56.907206   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.907987   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909541   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.909854   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:56.911455   16776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:56.914988   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:56.915000   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:59.477082   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:57:59.487185   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:59.487242   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:59.511535   57716 cri.go:89] found id: ""
	I1210 05:57:59.511549   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.511556   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:59.511562   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:57:59.511639   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:59.536235   57716 cri.go:89] found id: ""
	I1210 05:57:59.536249   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.536265   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:57:59.536271   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:57:59.536329   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:59.560801   57716 cri.go:89] found id: ""
	I1210 05:57:59.560815   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.560821   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:57:59.560827   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:59.560890   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:59.586232   57716 cri.go:89] found id: ""
	I1210 05:57:59.586247   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.586273   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:59.586279   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:59.586343   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:59.610087   57716 cri.go:89] found id: ""
	I1210 05:57:59.610101   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.610108   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:59.610113   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:59.610170   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:59.634249   57716 cri.go:89] found id: ""
	I1210 05:57:59.634263   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.634270   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:59.634275   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:59.634333   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:59.659066   57716 cri.go:89] found id: ""
	I1210 05:57:59.659100   57716 logs.go:282] 0 containers: []
	W1210 05:57:59.659106   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:59.659115   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:59.659125   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:59.670606   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:59.670622   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:59.744825   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:59.737528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.737938   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739528   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.739905   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:59.741403   16864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 05:57:59.744835   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:57:59.744847   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:57:59.806075   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:57:59.806092   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:59.841753   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:59.841769   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.400095   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:02.410925   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:02.410999   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:02.435337   57716 cri.go:89] found id: ""
	I1210 05:58:02.435351   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.435358   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:02.435363   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:02.435421   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:02.459273   57716 cri.go:89] found id: ""
	I1210 05:58:02.459287   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.459294   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:02.459299   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:02.459369   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:02.484838   57716 cri.go:89] found id: ""
	I1210 05:58:02.484859   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.484867   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:02.484872   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:02.484930   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:02.513703   57716 cri.go:89] found id: ""
	I1210 05:58:02.513718   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.513732   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:02.513738   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:02.513799   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:02.537442   57716 cri.go:89] found id: ""
	I1210 05:58:02.537456   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.537472   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:02.537478   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:02.537538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:02.562811   57716 cri.go:89] found id: ""
	I1210 05:58:02.562824   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.562831   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:02.562837   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:02.562904   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:02.593233   57716 cri.go:89] found id: ""
	I1210 05:58:02.593247   57716 logs.go:282] 0 containers: []
	W1210 05:58:02.593254   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:02.593263   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:02.593283   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:02.649484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:02.649502   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:02.668256   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:02.668270   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:02.746961   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:02.738936   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.739525   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741090   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741618   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.743247   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:02.738936   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.739525   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741090   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.741618   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:02.743247   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:02.746984   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:02.746995   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:02.810434   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:02.810451   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
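Each `listing CRI containers` / `found id: ""` pair above comes from `crictl ps -a --quiet --name=<component>`, which prints one container ID per line and nothing at all when no container matches; an empty result is what produces the `0 containers` warnings. A self-contained sketch of that probe, assuming crictl is on the local PATH rather than behind minikube's SSH runner (`listContainerIDs` is an illustrative name):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line. An empty slice corresponds
// to the `0 containers` / No container was found lines in the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// The same component set probed in each cycle of the log.
	for _, component := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Printf("probe %s: %v\n", component, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
	}
}
```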
	I1210 05:58:05.338812   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:05.348929   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:05.349015   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:05.376460   57716 cri.go:89] found id: ""
	I1210 05:58:05.376474   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.376481   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:05.376486   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:05.376545   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:05.401572   57716 cri.go:89] found id: ""
	I1210 05:58:05.401585   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.401593   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:05.401598   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:05.401657   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:05.426804   57716 cri.go:89] found id: ""
	I1210 05:58:05.426820   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.426827   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:05.426832   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:05.426889   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:05.450557   57716 cri.go:89] found id: ""
	I1210 05:58:05.450570   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.450577   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:05.450583   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:05.450640   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:05.476587   57716 cri.go:89] found id: ""
	I1210 05:58:05.476601   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.476607   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:05.476612   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:05.476669   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:05.501716   57716 cri.go:89] found id: ""
	I1210 05:58:05.501730   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.501736   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:05.501742   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:05.501801   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:05.526971   57716 cri.go:89] found id: ""
	I1210 05:58:05.526985   57716 logs.go:282] 0 containers: []
	W1210 05:58:05.526992   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:05.527000   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:05.527050   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:05.585508   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:05.585527   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:05.596526   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:05.596542   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:05.661377   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:05.650856   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.651411   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653229   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653833   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.655530   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:05.650856   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.651411   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653229   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.653833   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:05.655530   17068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:05.661388   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:05.661398   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:05.732863   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:05.732882   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:08.260047   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:08.270586   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:08.270648   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:08.298955   57716 cri.go:89] found id: ""
	I1210 05:58:08.298984   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.298992   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:08.298997   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:08.299088   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:08.326321   57716 cri.go:89] found id: ""
	I1210 05:58:08.326335   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.326342   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:08.326347   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:08.326410   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:08.350063   57716 cri.go:89] found id: ""
	I1210 05:58:08.350077   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.350095   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:08.350100   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:08.350157   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:08.374459   57716 cri.go:89] found id: ""
	I1210 05:58:08.374472   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.374480   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:08.374485   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:08.374549   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:08.398594   57716 cri.go:89] found id: ""
	I1210 05:58:08.398608   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.398615   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:08.398629   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:08.398685   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:08.423334   57716 cri.go:89] found id: ""
	I1210 05:58:08.423348   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.423355   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:08.423366   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:08.423424   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:08.448137   57716 cri.go:89] found id: ""
	I1210 05:58:08.448150   57716 logs.go:282] 0 containers: []
	W1210 05:58:08.448157   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:08.448164   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:08.448175   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:08.510732   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:08.502942   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.503736   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505412   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505743   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.507339   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:08.502942   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.503736   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505412   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.505743   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:08.507339   17170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:08.510751   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:08.510764   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:08.572194   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:08.572211   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:08.600446   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:08.600463   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:08.657452   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:08.657469   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
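The `Gathering logs for ...` lines fan out over a fixed set of sources, each backed by a single shell command: kubelet and containerd via `journalctl -u <unit> -n 400`, kernel messages via `dmesg -PH -L=never --level warn,err,crit,alert,emerg` (no pager, human-readable timestamps, color off, warnings and above only), cluster state via the version-pinned kubectl, and container status via crictl with a docker fallback. A sketch of that fan-out, with the source-to-command mapping transcribed from the log (`logSources` and the loop are illustrative; a failing source such as `describe nodes` is reported without aborting the rest):

```go
package main

import (
	"fmt"
	"os/exec"
)

// logSources mirrors the source -> command mapping visible in the log.
// The kubectl path is the version-pinned binary minikube installs on the node.
var logSources = map[string]string{
	"kubelet":          `sudo journalctl -u kubelet -n 400`,
	"containerd":       `sudo journalctl -u containerd -n 400`,
	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"describe nodes":   `sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	// Map iteration order is randomized in Go, which matches the varying
	// source order across cycles in the log above.
	for source, command := range logSources {
		fmt.Println("Gathering logs for", source, "...")
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		if err != nil {
			// A failing source (e.g. describe nodes while the apiserver
			// is down) is reported but does not abort the collection.
			fmt.Printf("  failed: %v\n", err)
		}
		fmt.Printf("%s", out)
	}
}
```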
	I1210 05:58:11.170762   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:11.180886   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:11.180951   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:11.205555   57716 cri.go:89] found id: ""
	I1210 05:58:11.205569   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.205584   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:11.205590   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:11.205664   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:11.233080   57716 cri.go:89] found id: ""
	I1210 05:58:11.233094   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.233101   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:11.233106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:11.233164   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:11.257793   57716 cri.go:89] found id: ""
	I1210 05:58:11.257807   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.257814   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:11.257821   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:11.257879   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:11.282030   57716 cri.go:89] found id: ""
	I1210 05:58:11.282042   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.282050   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:11.282055   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:11.282119   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:11.305111   57716 cri.go:89] found id: ""
	I1210 05:58:11.305125   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.305132   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:11.305138   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:11.305196   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:11.329236   57716 cri.go:89] found id: ""
	I1210 05:58:11.329250   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.329257   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:11.329264   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:11.329320   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:11.354605   57716 cri.go:89] found id: ""
	I1210 05:58:11.354620   57716 logs.go:282] 0 containers: []
	W1210 05:58:11.354627   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:11.354635   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:11.354645   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:11.386130   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:11.386146   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:11.444254   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:11.444272   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:11.455429   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:11.455446   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:11.522092   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:11.513675   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.514592   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516162   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516767   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.518501   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:11.513675   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.514592   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516162   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.516767   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:11.518501   17297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:11.522102   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:11.522112   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:14.084603   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:14.094719   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:14.094779   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:14.118507   57716 cri.go:89] found id: ""
	I1210 05:58:14.118520   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.118528   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:14.118533   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:14.118588   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:14.144079   57716 cri.go:89] found id: ""
	I1210 05:58:14.144093   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.144100   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:14.144105   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:14.144166   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:14.174736   57716 cri.go:89] found id: ""
	I1210 05:58:14.174750   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.174757   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:14.174762   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:14.174837   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:14.199688   57716 cri.go:89] found id: ""
	I1210 05:58:14.199709   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.199727   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:14.199733   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:14.199789   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:14.227765   57716 cri.go:89] found id: ""
	I1210 05:58:14.227779   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.227786   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:14.227793   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:14.227853   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:14.256531   57716 cri.go:89] found id: ""
	I1210 05:58:14.256546   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.256554   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:14.256559   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:14.256628   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:14.281035   57716 cri.go:89] found id: ""
	I1210 05:58:14.281054   57716 logs.go:282] 0 containers: []
	W1210 05:58:14.281062   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:14.281070   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:14.281082   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:14.307632   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:14.307647   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:14.363636   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:14.363655   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:14.374356   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:14.374372   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:14.439204   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:14.431102   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.431970   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.433831   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.434179   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.435681   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:14.431102   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.431970   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.433831   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.434179   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:14.435681   17403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:14.439214   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:14.439227   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
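Every `describe nodes` attempt in these cycles fails identically: the bundled kubectl dials localhost:8441, the apiserver port for this profile, and gets connection refused, which is consistent with the empty crictl listings, since no kube-apiserver container exists to listen there. A quick standalone probe of that condition, assuming the same port:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver endpoint kubectl is dialing in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```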
	I1210 05:58:17.000609   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:17.011094   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:17.011152   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:17.034914   57716 cri.go:89] found id: ""
	I1210 05:58:17.034928   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.034935   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:17.034940   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:17.034997   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:17.059216   57716 cri.go:89] found id: ""
	I1210 05:58:17.059229   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.059236   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:17.059241   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:17.059297   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:17.084654   57716 cri.go:89] found id: ""
	I1210 05:58:17.084667   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.084674   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:17.084679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:17.084734   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:17.108452   57716 cri.go:89] found id: ""
	I1210 05:58:17.108465   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.108472   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:17.108477   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:17.108538   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:17.131638   57716 cri.go:89] found id: ""
	I1210 05:58:17.131652   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.131660   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:17.131666   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:17.131724   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:17.157073   57716 cri.go:89] found id: ""
	I1210 05:58:17.157086   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.157093   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:17.157099   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:17.157155   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:17.181834   57716 cri.go:89] found id: ""
	I1210 05:58:17.181849   57716 logs.go:282] 0 containers: []
	W1210 05:58:17.181856   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:17.181864   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:17.181874   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:17.237484   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:17.237500   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:17.248803   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:17.248818   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:17.312123   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:17.304256   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.304922   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.306601   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.307186   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.308722   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:17.304256   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.304922   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.306601   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.307186   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:17.308722   17496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:17.312135   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:17.312145   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:17.375552   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:17.375570   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:19.903470   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:19.915506   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:19.915564   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:19.947745   57716 cri.go:89] found id: ""
	I1210 05:58:19.947758   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.947765   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:19.947771   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:19.947832   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:19.980662   57716 cri.go:89] found id: ""
	I1210 05:58:19.980676   57716 logs.go:282] 0 containers: []
	W1210 05:58:19.980683   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:19.980688   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:19.980746   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:20.014764   57716 cri.go:89] found id: ""
	I1210 05:58:20.014787   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.014795   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:20.014801   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:20.014868   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:20.043079   57716 cri.go:89] found id: ""
	I1210 05:58:20.043093   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.043100   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:20.043106   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:20.043168   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:20.071694   57716 cri.go:89] found id: ""
	I1210 05:58:20.071709   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.071717   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:20.071722   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:20.071785   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:20.097931   57716 cri.go:89] found id: ""
	I1210 05:58:20.097945   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.097952   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:20.097958   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:20.098028   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:20.122795   57716 cri.go:89] found id: ""
	I1210 05:58:20.122809   57716 logs.go:282] 0 containers: []
	W1210 05:58:20.122816   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:20.122824   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:20.122835   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:20.133825   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:20.133840   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:20.194901   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:20.186552   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.187444   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189151   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189874   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.191519   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:20.186552   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.187444   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189151   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.189874   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:20.191519   17596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:20.194911   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:20.194921   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:20.256875   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:20.256894   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:20.283841   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:20.283857   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:22.843646   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:22.853725   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:58:22.853782   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:58:22.878310   57716 cri.go:89] found id: ""
	I1210 05:58:22.878325   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.878332   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:58:22.878336   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 05:58:22.878393   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:58:22.902470   57716 cri.go:89] found id: ""
	I1210 05:58:22.902483   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.902490   57716 logs.go:284] No container was found matching "etcd"
	I1210 05:58:22.902495   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 05:58:22.902552   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:58:22.929428   57716 cri.go:89] found id: ""
	I1210 05:58:22.929442   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.929449   57716 logs.go:284] No container was found matching "coredns"
	I1210 05:58:22.929454   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:58:22.929512   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:58:22.962201   57716 cri.go:89] found id: ""
	I1210 05:58:22.962215   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.962222   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:58:22.962227   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:58:22.962286   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:58:22.988315   57716 cri.go:89] found id: ""
	I1210 05:58:22.988329   57716 logs.go:282] 0 containers: []
	W1210 05:58:22.988336   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:58:22.988341   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:58:22.988397   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:58:23.015788   57716 cri.go:89] found id: ""
	I1210 05:58:23.015801   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.015818   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:58:23.015824   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 05:58:23.015895   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:58:23.040476   57716 cri.go:89] found id: ""
	I1210 05:58:23.040490   57716 logs.go:282] 0 containers: []
	W1210 05:58:23.040497   57716 logs.go:284] No container was found matching "kindnet"
	I1210 05:58:23.040505   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 05:58:23.040515   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 05:58:23.097263   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 05:58:23.097281   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:58:23.108339   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:58:23.108357   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:58:23.174372   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:58:23.166022   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.166801   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168382   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168890   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.170644   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:58:23.166022   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.166801   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168382   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.168890   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:58:23.170644   17705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:58:23.174382   57716 logs.go:123] Gathering logs for containerd ...
	I1210 05:58:23.174393   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 05:58:23.238417   57716 logs.go:123] Gathering logs for container status ...
	I1210 05:58:23.238433   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:58:25.767502   57716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:25.777560   57716 kubeadm.go:602] duration metric: took 4m3.698254406s to restartPrimaryControlPlane
	W1210 05:58:25.777622   57716 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 05:58:25.777697   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
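At 05:58:25 the soft restart is abandoned: restartPrimaryControlPlane has polled for 4m3.7s without ever finding an apiserver, so minikube resets the control plane with `kubeadm reset --force` and rebuilds it. Note how the command prefixes PATH with the version-pinned binary directory so the bundled v1.35.0-rc.1 kubeadm is found first; a small sketch of that invocation (`kubeadmCmd` is an illustrative helper, not minikube's API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeadmCmd builds the sudo invocation used for kubeadm on the node,
// prefixing PATH with the version-pinned binary directory so the
// bundled kubeadm shadows any system copy.
func kubeadmCmd(binDir, args string) *exec.Cmd {
	script := fmt.Sprintf(`env PATH="%s:$PATH" kubeadm %s`, binDir, args)
	return exec.Command("sudo", "/bin/bash", "-c", script)
}

func main() {
	binDir := "/var/lib/minikube/binaries/v1.35.0-rc.1"
	// The reset issued at 05:58:25 after the restart attempt timed out.
	reset := kubeadmCmd(binDir, "reset --cri-socket /run/containerd/containerd.sock --force")
	fmt.Println(reset.String())
}
```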
	I1210 05:58:26.181572   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:58:26.194845   57716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:58:26.202430   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:58:26.202489   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:58:26.210414   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:58:26.210423   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 05:58:26.210474   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:58:26.218226   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:58:26.218281   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:58:26.225499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:58:26.233426   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:58:26.233479   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:58:26.240639   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.247882   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:58:26.247936   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:58:26.255235   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:58:26.263002   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:58:26.263069   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:58:26.270271   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:58:26.308640   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:58:26.308937   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:58:26.373888   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:58:26.373948   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 05:58:26.373980   57716 kubeadm.go:319] OS: Linux
	I1210 05:58:26.374022   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:58:26.374069   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:58:26.374113   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:58:26.374157   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:58:26.374200   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:58:26.374244   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:58:26.374300   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:58:26.374343   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:58:26.374385   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:58:26.445771   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:58:26.445880   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:58:26.445970   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:58:26.455518   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:58:26.460828   57716 out.go:252]   - Generating certificates and keys ...
	I1210 05:58:26.460930   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:58:26.461006   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:58:26.461110   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 05:58:26.461178   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 05:58:26.461260   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 05:58:26.461325   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 05:58:26.461413   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 05:58:26.461483   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 05:58:26.461565   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 05:58:26.461644   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 05:58:26.461682   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 05:58:26.461743   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:58:26.520044   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:58:27.005643   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:58:27.519831   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:58:27.780223   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:58:28.060883   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:58:28.061559   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:58:28.064834   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:58:28.067981   57716 out.go:252]   - Booting up control plane ...
	I1210 05:58:28.068070   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:58:28.068143   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:58:28.069383   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:58:28.090093   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:58:28.090188   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:58:28.097949   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:58:28.098042   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:58:28.098080   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:58:28.241595   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:58:28.241705   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:02:28.236858   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00011534s
	I1210 06:02:28.236887   57716 kubeadm.go:319] 
	I1210 06:02:28.236942   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:02:28.236986   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:02:28.237128   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:02:28.237135   57716 kubeadm.go:319] 
	I1210 06:02:28.237233   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:02:28.237262   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:02:28.237291   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:02:28.237295   57716 kubeadm.go:319] 
	I1210 06:02:28.241711   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:02:28.242149   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:02:28.242254   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:02:28.242529   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:02:28.242535   57716 kubeadm.go:319] 
	I1210 06:02:28.242598   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:02:28.242730   57716 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00011534s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 06:02:28.242815   57716 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:02:28.653276   57716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:02:28.666846   57716 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:02:28.666902   57716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:02:28.676196   57716 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:02:28.676206   57716 kubeadm.go:158] found existing configuration files:
	
	I1210 06:02:28.676262   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:02:28.683929   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:02:28.683984   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:02:28.691531   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:02:28.699193   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:02:28.699247   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:02:28.706499   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.713695   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:02:28.713761   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:02:28.721311   57716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:02:28.729191   57716 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:02:28.729245   57716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:02:28.737059   57716 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:02:28.777392   57716 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:02:28.777754   57716 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:02:28.849302   57716 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:02:28.849368   57716 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:02:28.849403   57716 kubeadm.go:319] OS: Linux
	I1210 06:02:28.849460   57716 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:02:28.849508   57716 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:02:28.849555   57716 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:02:28.849602   57716 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:02:28.849649   57716 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:02:28.849696   57716 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:02:28.849745   57716 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:02:28.849792   57716 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:02:28.849837   57716 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:02:28.921564   57716 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:02:28.921662   57716 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:02:28.921748   57716 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:02:28.926509   57716 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:02:28.929904   57716 out.go:252]   - Generating certificates and keys ...
	I1210 06:02:28.929994   57716 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:02:28.930057   57716 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:02:28.930131   57716 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:02:28.930201   57716 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:02:28.930270   57716 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:02:28.930322   57716 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:02:28.930384   57716 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:02:28.930444   57716 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:02:28.930517   57716 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:02:28.930589   57716 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:02:28.930766   57716 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:02:28.930854   57716 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:02:29.206630   57716 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:02:29.720612   57716 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:02:29.887413   57716 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:02:30.011857   57716 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:02:30.197709   57716 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:02:30.198347   57716 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:02:30.201006   57716 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:02:30.204123   57716 out.go:252]   - Booting up control plane ...
	I1210 06:02:30.204220   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:02:30.204296   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:02:30.204794   57716 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:02:30.227311   57716 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:02:30.227437   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:02:30.235547   57716 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:02:30.235634   57716 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:02:30.235945   57716 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:02:30.373162   57716 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:02:30.373269   57716 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:06:30.371537   57716 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000138118s
	I1210 06:06:30.371561   57716 kubeadm.go:319] 
	I1210 06:06:30.371641   57716 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:06:30.371685   57716 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:06:30.371790   57716 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:06:30.371795   57716 kubeadm.go:319] 
	I1210 06:06:30.371898   57716 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:06:30.371929   57716 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:06:30.371959   57716 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:06:30.371962   57716 kubeadm.go:319] 
	I1210 06:06:30.376139   57716 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:06:30.376577   57716 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:06:30.376687   57716 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:06:30.376961   57716 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:06:30.376966   57716 kubeadm.go:319] 
	I1210 06:06:30.377035   57716 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:06:30.377094   57716 kubeadm.go:403] duration metric: took 12m8.33567442s to StartCluster
	I1210 06:06:30.377125   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:06:30.377187   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:06:30.401132   57716 cri.go:89] found id: ""
	I1210 06:06:30.401147   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.401154   57716 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:30.401160   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:06:30.401219   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:06:30.437615   57716 cri.go:89] found id: ""
	I1210 06:06:30.437630   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.437637   57716 logs.go:284] No container was found matching "etcd"
	I1210 06:06:30.437642   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:06:30.437699   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:06:30.462667   57716 cri.go:89] found id: ""
	I1210 06:06:30.462681   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.462688   57716 logs.go:284] No container was found matching "coredns"
	I1210 06:06:30.462693   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:06:30.462752   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:06:30.491407   57716 cri.go:89] found id: ""
	I1210 06:06:30.491420   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.491428   57716 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:30.491433   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:06:30.491493   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:06:30.516073   57716 cri.go:89] found id: ""
	I1210 06:06:30.516086   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.516092   57716 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:30.516098   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:06:30.516154   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:06:30.540636   57716 cri.go:89] found id: ""
	I1210 06:06:30.540649   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.540656   57716 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:30.540679   57716 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:06:30.540736   57716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:06:30.565548   57716 cri.go:89] found id: ""
	I1210 06:06:30.565570   57716 logs.go:282] 0 containers: []
	W1210 06:06:30.565578   57716 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:30.565586   57716 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:30.565596   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:30.620548   57716 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:30.620565   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:30.631284   57716 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:30.631299   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:30.692450   57716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:30.684857   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.685223   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.686724   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.687076   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.688628   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:30.684857   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.685223   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.686724   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.687076   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:30.688628   21521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:30.692461   57716 logs.go:123] Gathering logs for containerd ...
	I1210 06:06:30.692471   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:06:30.755422   57716 logs.go:123] Gathering logs for container status ...
	I1210 06:06:30.755444   57716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 06:06:30.784033   57716 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:06:30.784067   57716 out.go:285] * 
	W1210 06:06:30.784157   57716 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.784176   57716 out.go:285] * 
	W1210 06:06:30.786468   57716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:06:30.793223   57716 out.go:203] 
	W1210 06:06:30.796021   57716 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000138118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:06:30.796079   57716 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:06:30.796099   57716 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:06:30.799180   57716 out.go:203] 
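	
	The Suggestion line above names a possible workaround for the kubelet failure. A minimal sketch of retrying with it, with the flag copied verbatim from the suggestion and the profile name and runtime taken from this log (whether it clears the cgroup v1 validation on kubelet v1.35.0-rc.1 is not confirmed by this run):
	
	  # Hypothetical retry; not executed as part of this report.
	  minikube start -p functional-644034 --driver=docker --container-runtime=containerd \
	    --extra-config=kubelet.cgroup-driver=systemd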
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477949649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477963918Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.477995246Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478012321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478021774Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478031620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478040424Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478051649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478070291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478098854Z" level=info msg="Connect containerd service"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478383782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.478960226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.497963642Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498025206Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498057067Z" level=info msg="Start subscribing containerd event"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.498101696Z" level=info msg="Start recovering state"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526273092Z" level=info msg="Start event monitor"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526463774Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526536103Z" level=info msg="Start streaming server"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526593630Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526675700Z" level=info msg="runtime interface starting up..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526739774Z" level=info msg="starting plugins..."
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.526805581Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 05:54:20 functional-644034 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 05:54:20 functional-644034 containerd[10277]: time="2025-12-10T05:54:20.528842308Z" level=info msg="containerd successfully booted in 0.071400s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:29.787149   23041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.788026   23041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.789078   23041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.789618   23041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.791285   23041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 06:08:29 up 50 min,  0 user,  load average: 0.33, 0.25, 0.36
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:08:26 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:27 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 10 06:08:27 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:27 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:27 functional-644034 kubelet[22925]: E1210 06:08:27.457635   22925 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:27 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:27 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:28 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 10 06:08:28 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:28 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:28 functional-644034 kubelet[22931]: E1210 06:08:28.207734   22931 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:28 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:28 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:28 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 478.
	Dec 10 06:08:28 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:28 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:28 functional-644034 kubelet[22949]: E1210 06:08:28.914262   22949 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:28 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:28 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:29 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 10 06:08:29 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:29 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:29 functional-644034 kubelet[23025]: E1210 06:08:29.729706   23025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:29 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:29 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
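The kubelet log above contains the root cause for this cluster's state: kubelet v1.35.0-rc.1 fails configuration validation and crash-loops (restart counter 476-479) because the host runs cgroup v1, which this kubelet configuration refuses ("cgroup v1 support is unsupported"), so the apiserver on port 8441 never comes back up. A quick, illustrative way to check a host's cgroup version, assuming shell access to the node (a diagnostic sketch, not part of the test suite):

    # cgroup2fs => cgroup v2 (accepted); tmpfs => cgroup v1 (rejected by this kubelet config)
    stat -fc %T /sys/fs/cgroup/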
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (357.561334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
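Consistent with the Stopped status, the apiserver endpoint the tests dial is refusing connections. A direct probe from the host would confirm it (illustrative command, reusing the endpoint from the errors above; expected to fail here):

    curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"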
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
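The wait loop repeatedly lists pods in kube-system matching the integration-test=storage-provisioner label; by hand, roughly the same query would be (hypothetical invocation, same namespace and selector as the warnings below):

    kubectl -n kube-system get pods -l integration-test=storage-provisioner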
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 5 more times]
E1210 06:06:44.571596    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 3 more times]
I1210 06:06:48.972314    4116 retry.go:31] will retry after 2.59575428s: Temporary Error: Get "http://10.107.135.83": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 12 more times]
I1210 06:07:01.569888    4116 retry.go:31] will retry after 5.821971202s: Temporary Error: Get "http://10.107.135.83": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 15 more times]
I1210 06:07:17.393068    4116 retry.go:31] will retry after 6.312509257s: Temporary Error: Get "http://10.107.135.83": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 15 more times]
I1210 06:07:33.706594    4116 retry.go:31] will retry after 14.464545572s: Temporary Error: Get "http://10.107.135.83": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 23 more times]
I1210 06:07:58.171505    4116 retry.go:31] will retry after 19.87342464s: Temporary Error: Get "http://10.107.135.83": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
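The interleaved retry.go lines come from a separate poll against the tunnel service at http://10.107.135.83, retrying with growing delays (2.6s, 5.8s, 6.3s, 14.5s, 19.9s). A minimal shell sketch of that retry-with-backoff pattern, illustrative only (the real logic lives in minikube's retry package):

    url=http://10.107.135.83; delay=2
    for attempt in 1 2 3 4 5; do
      # --max-time bounds each probe, mirroring the Client.Timeout errors above
      curl -fsS --max-time 5 "$url" && break
      echo "attempt $attempt failed; retrying in ${delay}s"
      sleep "$delay"; delay=$((delay * 2))
    done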
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous warning repeated 66 more times)
E1210 06:09:47.640590    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous warning repeated 50 more times)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
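For orientation, the retry loop that produces the warning flood above can be sketched with client-go. This is a minimal sketch, not minikube's actual helper: the function name, the 2-second poll interval, and the kubeconfig path are assumptions; the 4m0s timeout and the warning format mirror the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls the apiserver until a pod matching the label
// selector reaches Running, printing a warning on each failed attempt --
// the shape of the repeated "connection refused" lines above.
func waitForRunningPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// A down apiserver lands here; returning a nil error keeps
				// the poll retrying until the context deadline is exceeded.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForRunningPod(cs, "kube-system", "integration-test=storage-provisioner", 4*time.Minute)
	fmt.Println("wait result:", err)
}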
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (335.18664ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
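The status probe above can be reproduced outside the harness. A minimal sketch in Go, assuming the same relative binary path; treating a non-zero exit that still yields parsable stdout as informational follows the harness's own "may be ok" note, but the exact exit-code handling here is an assumption:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// checkAPIServer mirrors the probe above: minikube status exits non-zero
// when a component is down, so exit status 2 with usable stdout ("Stopped")
// is still a readable answer rather than a hard failure.
func checkAPIServer(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is captured even on non-zero exit
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if err != nil && !(errors.As(err, &exitErr) && state != "") {
		return "", err // real failure: no usable status text
	}
	return state, nil // e.g. "Stopped" with exit status 2 in this run
}

func main() {
	state, err := checkAPIServer("functional-644034")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", state)
}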
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
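Two details in the inspect output localize the failure: the container State is still Running, and 8441/tcp is published to 127.0.0.1:32791, so the connection refusals on 192.168.49.2:8441 point at the apiserver process inside the guest rather than at Docker networking. Pulling the published port back out of the same inspect data can be sketched as below (standard docker CLI Go-templating, run from the host; the profile name matches this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index into NetworkSettings.Ports for the apiserver port minikube
	// publishes on this profile (8441/tcp in the JSON above).
	out, err := exec.Command("docker", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-644034").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out))) // 32791 in this run
}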
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (307.894855ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-644034 image load --daemon kicbase/echo-server:functional-644034 --alsologtostderr                                                                   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image save kicbase/echo-server:functional-644034 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image rm kicbase/echo-server:functional-644034 --alsologtostderr                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image save --daemon kicbase/echo-server:functional-644034 --alsologtostderr                                                                   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh sudo cat /etc/test/nested/copy/4116/hosts                                                                                                 │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh sudo cat /etc/ssl/certs/4116.pem                                                                                                          │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh sudo cat /usr/share/ca-certificates/4116.pem                                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh sudo cat /etc/ssl/certs/41162.pem                                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh sudo cat /usr/share/ca-certificates/41162.pem                                                                                             │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image ls --format short --alsologtostderr                                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image          │ functional-644034 image ls --format yaml --alsologtostderr                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh            │ functional-644034 ssh pgrep buildkitd                                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ image          │ functional-644034 image build -t localhost/my-image:functional-644034 testdata/build --alsologtostderr                                                          │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:09 UTC │
	│ image          │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ image          │ functional-644034 image ls --format json --alsologtostderr                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ image          │ functional-644034 image ls --format table --alsologtostderr                                                                                                     │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ update-context │ functional-644034 update-context --alsologtostderr -v=2                                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ update-context │ functional-644034 update-context --alsologtostderr -v=2                                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	│ update-context │ functional-644034 update-context --alsologtostderr -v=2                                                                                                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:09 UTC │ 10 Dec 25 06:09 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:08:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:08:45.411458   75070 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:08:45.411572   75070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.411578   75070 out.go:374] Setting ErrFile to fd 2...
	I1210 06:08:45.411583   75070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.411858   75070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:08:45.412318   75070 out.go:368] Setting JSON to false
	I1210 06:08:45.413062   75070 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3076,"bootTime":1765343850,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:08:45.413124   75070 start.go:143] virtualization:  
	I1210 06:08:45.416311   75070 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:08:45.420093   75070 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:08:45.420262   75070 notify.go:221] Checking for updates...
	I1210 06:08:45.426058   75070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:08:45.428921   75070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:08:45.431634   75070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:08:45.435298   75070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:08:45.438128   75070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:08:45.441516   75070 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:08:45.442087   75070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:08:45.475268   75070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:08:45.475386   75070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.544088   75070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.534810687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.544195   75070 docker.go:319] overlay module found
	I1210 06:08:45.547299   75070 out.go:179] * Using the docker driver based on existing profile
	I1210 06:08:45.550158   75070 start.go:309] selected driver: docker
	I1210 06:08:45.550174   75070 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.550288   75070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:08:45.550407   75070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.603255   75070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.594348639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.603659   75070 cni.go:84] Creating CNI manager for ""
	I1210 06:08:45.603722   75070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:08:45.603784   75070 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.606777   75070 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.333478093Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.334324204Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.381371963Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.384119315Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.386878810Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.396920394Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\" returns successfully"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.649657881Z" level=info msg="No images store for sha256:733f7c8d47a50649df9ff7f459c6a9f5ea6cf8b56d7479873db9382be9ff7b67"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.651811202Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.661393500Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.661820868Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.473777223Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\""
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.476176068Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.478215443Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.486799055Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\" returns successfully"
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.151517888Z" level=info msg="No images store for sha256:940ab224abbedf4641492e605bc93457ac025edf2a59d497965f90646e617a61"
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.153991195Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.161932636Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.162615620Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:09:01 functional-644034 containerd[10277]: time="2025-12-10T06:09:01.167370249Z" level=info msg="connecting to shim xycp1s7fk3bp9bik8ns2tu6ds" address="unix:///run/containerd/s/510bb2eb4222a5434e528f152e5e1c809b059325cb6f88ca5a27a90599ab04ce" namespace=k8s.io protocol=ttrpc version=3
	Dec 10 06:09:01 functional-644034 containerd[10277]: time="2025-12-10T06:09:01.244107160Z" level=info msg="shim disconnected" id=xycp1s7fk3bp9bik8ns2tu6ds namespace=k8s.io
	Dec 10 06:09:01 functional-644034 containerd[10277]: time="2025-12-10T06:09:01.244147742Z" level=info msg="cleaning up after shim disconnected" id=xycp1s7fk3bp9bik8ns2tu6ds namespace=k8s.io
	Dec 10 06:09:01 functional-644034 containerd[10277]: time="2025-12-10T06:09:01.244157802Z" level=info msg="cleaning up dead shim" id=xycp1s7fk3bp9bik8ns2tu6ds namespace=k8s.io
	Dec 10 06:09:01 functional-644034 containerd[10277]: time="2025-12-10T06:09:01.533425164Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-644034\""
	Dec 10 06:09:01 functional-644034 containerd[10277]: time="2025-12-10T06:09:01.542954063Z" level=info msg="ImageCreate event name:\"sha256:105c584c08623efaed11abb744866aab83b40c7c1531df4183e9b5ca9d16d699\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:09:01 functional-644034 containerd[10277]: time="2025-12-10T06:09:01.543560533Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:10:40.589030   25742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:10:40.589617   25742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:10:40.591220   25742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:10:40.591670   25742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:10:40.593096   25742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 06:10:40 up 53 min,  0 user,  load average: 0.18, 0.28, 0.36
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:10:37 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:10:37 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 10 06:10:37 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:37 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:37 functional-644034 kubelet[25612]: E1210 06:10:37.955546   25612 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:10:37 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:10:37 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:10:38 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 10 06:10:38 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:38 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:38 functional-644034 kubelet[25618]: E1210 06:10:38.703468   25618 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:10:38 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:10:38 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:10:39 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 10 06:10:39 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:39 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:39 functional-644034 kubelet[25624]: E1210 06:10:39.477782   25624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:10:39 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:10:39 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:10:40 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 10 06:10:40 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:40 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:10:40 functional-644034 kubelet[25659]: E1210 06:10:40.226647   25659 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:10:40 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:10:40 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (329.223012ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.69s)
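The kubelet excerpt above points at the proximate cause: kubelet config validation rejects the host ("kubelet is configured to not run on a host using cgroup v1"), kubelet crash-loops (restart counter past 650), the apiserver on port 8441 never comes back, and the PersistentVolumeClaim checks time out against a stopped control plane. A minimal triage sketch for reproducing the cgroup finding on the CI host (assumes shell access; both commands are standard and are not taken from this run):

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1
    stat -fc %T /sys/fs/cgroup
    # cross-check what Docker reports for the same host
    docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'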

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-644034 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-644034 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (59.405166ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-644034 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
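The template error repeated above is a symptom rather than the failure itself: with the apiserver at 192.168.49.2:8441 refusing connections, kubectl serializes an empty List, and (index .items 0) panics on the empty slice before any label check can run. A hypothetical guarded variant of the same query (not the template the test uses) would degrade to empty output instead of a template error:

    kubectl --context functional-644034 get nodes --output=go-template \
      --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'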
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-644034
helpers_test.go:244: (dbg) docker inspect functional-644034:

-- stdout --
	[
	    {
	        "Id": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	        "Created": "2025-12-10T05:39:27.309770093Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:39:27.369333205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/hosts",
	        "LogPath": "/var/lib/docker/containers/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563/e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563-json.log",
	        "Name": "/functional-644034",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-644034:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-644034",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4ca934a91703cfda542f716a0f7d9c1f815f93d9f468e9c87d82226f03e9563",
	                "LowerDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e429e0f2d9f347e225bfe87d1019975d530aa6b09d39ec5daffe06e87ac5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644034",
	                "Source": "/var/lib/docker/volumes/functional-644034/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644034",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644034",
	                "name.minikube.sigs.k8s.io": "functional-644034",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc59673393ccba3335e36ee0c068a4eda312eff988a71a741676e0b22b06a994",
	            "SandboxKey": "/var/run/docker/netns/cc59673393cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644034": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:64:7f:e3:64:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fafc19aae8b74bb81b939e69867a2a669c0a4f9b4958eec7c121e8ea683ece36",
	                    "EndpointID": "24be975d7d63f27755118cec683cc9ff3e82f4778ae63e39d9ab90999a373ce6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644034",
	                        "e4ca934a9170"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
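The inspect output shows the container itself is running and the apiserver port is published (8441/tcp bound to 127.0.0.1:32791); only the control plane inside it is down. For reference, the same mapping can be read back with standard docker inspect templating (a sketch, not part of the test run):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-644034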
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644034 -n functional-644034: exit status 2 (317.39708ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount1 --alsologtostderr -v=1                            │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount1                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh findmnt -T /mount1                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh findmnt -T /mount2                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh findmnt -T /mount3                                                                                                                        │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ mount     │ -p functional-644034 --kill=true                                                                                                                                │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ start     │ -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ start     │ -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1               │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ start     │ -p functional-644034 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                         │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-644034 --alsologtostderr -v=1                                                                                                  │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ ssh       │ functional-644034 ssh sudo systemctl is-active docker                                                                                                           │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ ssh       │ functional-644034 ssh sudo systemctl is-active crio                                                                                                             │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │                     │
	│ image     │ functional-644034 image load --daemon kicbase/echo-server:functional-644034 --alsologtostderr                                                                   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image load --daemon kicbase/echo-server:functional-644034 --alsologtostderr                                                                   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image load --daemon kicbase/echo-server:functional-644034 --alsologtostderr                                                                   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image save kicbase/echo-server:functional-644034 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image rm kicbase/echo-server:functional-644034 --alsologtostderr                                                                              │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image ls                                                                                                                                      │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	│ image     │ functional-644034 image save --daemon kicbase/echo-server:functional-644034 --alsologtostderr                                                                   │ functional-644034 │ jenkins │ v1.37.0 │ 10 Dec 25 06:08 UTC │ 10 Dec 25 06:08 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:08:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:08:45.411458   75070 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:08:45.411572   75070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.411578   75070 out.go:374] Setting ErrFile to fd 2...
	I1210 06:08:45.411583   75070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.411858   75070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:08:45.412318   75070 out.go:368] Setting JSON to false
	I1210 06:08:45.413062   75070 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3076,"bootTime":1765343850,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:08:45.413124   75070 start.go:143] virtualization:  
	I1210 06:08:45.416311   75070 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:08:45.420093   75070 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:08:45.420262   75070 notify.go:221] Checking for updates...
	I1210 06:08:45.426058   75070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:08:45.428921   75070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:08:45.431634   75070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:08:45.435298   75070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:08:45.438128   75070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:08:45.441516   75070 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:08:45.442087   75070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:08:45.475268   75070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:08:45.475386   75070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.544088   75070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.534810687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.544195   75070 docker.go:319] overlay module found
	I1210 06:08:45.547299   75070 out.go:179] * Using the docker driver based on existing profile
	I1210 06:08:45.550158   75070 start.go:309] selected driver: docker
	I1210 06:08:45.550174   75070 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.550288   75070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:08:45.550407   75070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.603255   75070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.594348639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.603659   75070 cni.go:84] Creating CNI manager for ""
	I1210 06:08:45.603722   75070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:08:45.603784   75070 start.go:353] cluster config:
	{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.606777   75070 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:08:49 functional-644034 containerd[10277]: time="2025-12-10T06:08:49.276148579Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.076886275Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\""
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.079826621Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.082271513Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.094106046Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\" returns successfully"
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.324341131Z" level=info msg="No images store for sha256:733f7c8d47a50649df9ff7f459c6a9f5ea6cf8b56d7479873db9382be9ff7b67"
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.326463272Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.333478093Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:50 functional-644034 containerd[10277]: time="2025-12-10T06:08:50.334324204Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.381371963Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.384119315Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.386878810Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.396920394Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\" returns successfully"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.649657881Z" level=info msg="No images store for sha256:733f7c8d47a50649df9ff7f459c6a9f5ea6cf8b56d7479873db9382be9ff7b67"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.651811202Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.661393500Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:51 functional-644034 containerd[10277]: time="2025-12-10T06:08:51.661820868Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.473777223Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\""
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.476176068Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.478215443Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 10 06:08:52 functional-644034 containerd[10277]: time="2025-12-10T06:08:52.486799055Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-644034\" returns successfully"
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.151517888Z" level=info msg="No images store for sha256:940ab224abbedf4641492e605bc93457ac025edf2a59d497965f90646e617a61"
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.153991195Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-644034\""
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.161932636Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:08:53 functional-644034 containerd[10277]: time="2025-12-10T06:08:53.162615620Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-644034\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:54.755860   24449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:54.756543   24449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:54.757790   24449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:54.758299   24449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:54.759904   24449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	
	
	==> kernel <==
	 06:08:54 up 51 min,  0 user,  load average: 0.85, 0.37, 0.40
	Linux functional-644034 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:08:51 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:52 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 509.
	Dec 10 06:08:52 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:52 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:52 functional-644034 kubelet[24222]: E1210 06:08:52.237619   24222 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:52 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:52 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:52 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 510.
	Dec 10 06:08:52 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:52 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:52 functional-644034 kubelet[24275]: E1210 06:08:52.978305   24275 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:52 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:52 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:53 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 511.
	Dec 10 06:08:53 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:53 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:53 functional-644034 kubelet[24340]: E1210 06:08:53.747608   24340 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:53 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:53 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:08:54 functional-644034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 512.
	Dec 10 06:08:54 functional-644034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:54 functional-644034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:08:54 functional-644034 kubelet[24372]: E1210 06:08:54.455090   24372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:08:54 functional-644034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:08:54 functional-644034 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644034 -n functional-644034: exit status 2 (336.953573ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-644034" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.39s)
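
The kubelet journal above shows the common root cause for this block of failures: the v1.35.0-rc.1 kubelet is configured to refuse cgroup v1 hosts, and this Ubuntu 20.04 host (kernel 5.15.0-1084-aws) still boots the legacy hierarchy, so kubelet crash-loops (restart counters 509-512) and the apiserver never comes up. A minimal stand-alone Go sketch, not part of the harness, for checking which hierarchy a host exposes:

package main

import (
	"fmt"
	"syscall"
)

// CGROUP2_SUPER_MAGIC from linux/magic.h: the filesystem magic that the
// unified cgroup v2 hierarchy reports for /sys/fs/cgroup.
const cgroup2SuperMagic = 0x63677270

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == cgroup2SuperMagic {
		fmt.Println("cgroup v2 (unified): kubelet v1.35+ can start")
	} else {
		// Any other magic (tmpfs on a stock v1 layout) matches the
		// "kubelet is configured to not run on a host using cgroup v1"
		// error in the journal above.
		fmt.Println("cgroup v1 (legacy)")
	}
}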

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1210 06:06:38.424296   70797 out.go:360] Setting OutFile to fd 1 ...
I1210 06:06:38.425845   70797 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:06:38.425862   70797 out.go:374] Setting ErrFile to fd 2...
I1210 06:06:38.425869   70797 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:06:38.426181   70797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:06:38.426471   70797 mustload.go:66] Loading cluster: functional-644034
I1210 06:06:38.428891   70797 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:06:38.429483   70797 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:06:38.452561   70797 host.go:66] Checking if "functional-644034" exists ...
I1210 06:06:38.452858   70797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:06:38.560418   70797 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:06:38.549445475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:06:38.560536   70797 api_server.go:166] Checking apiserver status ...
I1210 06:06:38.560595   70797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:06:38.560637   70797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:06:38.613362   70797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
W1210 06:06:38.743445   70797 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 06:06:38.750340   70797 out.go:179] * The control-plane node functional-644034 apiserver is not running: (state=Stopped)
I1210 06:06:38.753401   70797 out.go:179]   To start a cluster, run: "minikube start -p functional-644034"

stdout: * The control-plane node functional-644034 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-644034"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 70798: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)
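
Exit code 103 is minikube's "apiserver stopped" advisory path, and the tunnel trace above shows how it gets there: before opening a tunnel, minikube probes for a kube-apiserver process inside the node with pgrep. A rough stand-alone equivalent of that probe (the pgrep pattern is copied from the log; the Go wrapper is an assumption, not harness code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits non-zero when nothing matches, which is exactly the
	// "stopped: unable to get apiserver pid" branch in the log above.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}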

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-644034 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-644034 apply -f testdata/testsvc.yaml: exit status 1 (97.726511ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-644034 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)
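
The kubectl message makes this look like a validation problem, but the underlying failure is a plain TCP refusal on the apiserver port (the openapi download is simply the first request to hit it). A bare dial against the endpoint from the error message reproduces it without kubectl; a minimal sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the kubectl error above.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println(err) // dial tcp 192.168.49.2:8441: connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port reachable")
}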

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (109.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.107.135.83": Temporary Error: Get "http://10.107.135.83": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-644034 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-644034 get svc nginx-svc: exit status 1 (68.595173ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-644034 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (109.14s)
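
The 109-second wall time here is the probe's retry budget: with the apiserver down there are no endpoints behind the ClusterIP 10.107.135.83, so every GET stalls until the client timeout fires. A minimal sketch of such a bounded probe (the 5-second timeout is an assumption; the test's own budget is clearly longer):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.107.135.83") // ClusterIP from the log
	if err != nil {
		// The failure mode above: Client.Timeout exceeded while awaiting headers.
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // success would be a 200 with "Welcome to nginx!" in the body
}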

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-644034 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-644034 create deployment hello-node --image kicbase/echo-server: exit status 1 (52.601281ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-644034 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 service list: exit status 103 (259.320803ms)

-- stdout --
	* The control-plane node functional-644034 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-644034"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-644034 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-644034 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-644034\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 service list -o json: exit status 103 (265.546202ms)

-- stdout --
	* The control-plane node functional-644034 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-644034"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-644034 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 service --namespace=default --https --url hello-node: exit status 103 (255.686813ms)

-- stdout --
	* The control-plane node functional-644034 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-644034"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-644034 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 service hello-node --url --format={{.IP}}: exit status 103 (251.536538ms)

-- stdout --
	* The control-plane node functional-644034 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-644034"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-644034 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-644034 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-644034\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.25s)
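
The Format subtest feeds whatever "service --url --format={{.IP}}" prints into an IP check, so minikube's multi-line advisory is rejected outright. A sketch of that final check (assuming it is net.ParseIP or equivalent):

package main

import (
	"fmt"
	"net"
)

func main() {
	got := "* The control-plane node functional-644034 apiserver is not running: (state=Stopped)"
	if net.ParseIP(got) == nil {
		// net.ParseIP returns nil for anything that is not an address,
		// which is the "is not a valid IP" assertion failing above.
		fmt.Printf("%q is not a valid IP\n", got)
	}
}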

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 service hello-node --url: exit status 103 (265.792333ms)

-- stdout --
	* The control-plane node functional-644034 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-644034"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-644034 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-644034 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-644034"
functional_test.go:1579: failed to parse "* The control-plane node functional-644034 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-644034\"": parse "* The control-plane node functional-644034 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-644034\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.27s)
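
The URL subtest fails one step later than Format but for the same underlying reason: the advisory spans two lines, and the embedded newline is an ASCII control character that net/url rejects. A self-contained reproduction of the exact parse error logged above:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	advisory := "* The control-plane node functional-644034 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-644034\""
	if _, err := url.Parse(advisory); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
}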

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765346915509669717" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765346915509669717" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765346915509669717" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001/test-1765346915509669717
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.926683ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:08:35.872861    4116 retry.go:31] will retry after 686.430728ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh -- ls -la /mount-9p
E1210 06:08:37.012735    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 06:08 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 06:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 06:08 test-1765346915509669717
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh cat /mount-9p/test-1765346915509669717
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-644034 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-644034 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (55.834723ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-644034 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (265.932841ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=33367)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 10 06:08 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 10 06:08 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 10 06:08 test-1765346915509669717
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-644034 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:33367
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001:/mount-9p --alsologtostderr -v=1] stderr:
I1210 06:08:35.563545   73138 out.go:360] Setting OutFile to fd 1 ...
I1210 06:08:35.563769   73138 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:35.563791   73138 out.go:374] Setting ErrFile to fd 2...
I1210 06:08:35.563806   73138 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:35.564080   73138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:08:35.564351   73138 mustload.go:66] Loading cluster: functional-644034
I1210 06:08:35.564724   73138 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:35.565267   73138 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:08:35.586112   73138 host.go:66] Checking if "functional-644034" exists ...
I1210 06:08:35.586416   73138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:08:35.705392   73138 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:35.695398457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:08:35.705555   73138 cli_runner.go:164] Run: docker network inspect functional-644034 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:08:35.734631   73138 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001 into VM as /mount-9p ...
I1210 06:08:35.737643   73138 out.go:179]   - Mount type:   9p
I1210 06:08:35.740438   73138 out.go:179]   - User ID:      docker
I1210 06:08:35.743327   73138 out.go:179]   - Group ID:     docker
I1210 06:08:35.746482   73138 out.go:179]   - Version:      9p2000.L
I1210 06:08:35.749352   73138 out.go:179]   - Message Size: 262144
I1210 06:08:35.752146   73138 out.go:179]   - Options:      map[]
I1210 06:08:35.754933   73138 out.go:179]   - Bind Address: 192.168.49.1:33367
I1210 06:08:35.757757   73138 out.go:179] * Userspace file server: 
I1210 06:08:35.758125   73138 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 06:08:35.758208   73138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:08:35.782485   73138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
I1210 06:08:35.893746   73138 mount.go:180] unmount for /mount-9p ran successfully
I1210 06:08:35.893776   73138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1210 06:08:35.901767   73138 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=33367,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1210 06:08:35.911903   73138 main.go:127] stdlog: ufs.go:141 connected
I1210 06:08:35.912063   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tversion tag 65535 msize 262144 version '9P2000.L'
I1210 06:08:35.912111   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rversion tag 65535 msize 262144 version '9P2000'
I1210 06:08:35.912329   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1210 06:08:35.912384   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rattach tag 0 aqid (ed6cdb 6e034b0 'd')
I1210 06:08:35.912662   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 0
I1210 06:08:35.912716   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cdb 6e034b0 'd') m d775 at 0 mt 1765346915 l 4096 t 0 d 0 ext )
I1210 06:08:35.915816   73138 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/.mount-process: {Name:mk955b1d2670578f624323c619cf6aa8ac56eb40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:08:35.916013   73138 mount.go:105] mount successful: ""
I1210 06:08:35.919632   73138 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun771770404/001 to /mount-9p
I1210 06:08:35.922460   73138 out.go:203] 
I1210 06:08:35.925326   73138 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1210 06:08:37.087445   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 0
I1210 06:08:37.087521   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cdb 6e034b0 'd') m d775 at 0 mt 1765346915 l 4096 t 0 d 0 ext )
I1210 06:08:37.087904   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 1 
I1210 06:08:37.087955   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 
I1210 06:08:37.088091   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Topen tag 0 fid 1 mode 0
I1210 06:08:37.088159   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Ropen tag 0 qid (ed6cdb 6e034b0 'd') iounit 0
I1210 06:08:37.088296   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 0
I1210 06:08:37.088341   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cdb 6e034b0 'd') m d775 at 0 mt 1765346915 l 4096 t 0 d 0 ext )
I1210 06:08:37.088502   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 0 count 262120
I1210 06:08:37.088639   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 258
I1210 06:08:37.088773   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 258 count 261862
I1210 06:08:37.088801   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.088925   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:08:37.088952   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.089094   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 06:08:37.089129   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 (ed6cdc 6e034b0 '') 
I1210 06:08:37.089240   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.089276   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cdc 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.089411   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.089444   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cdc 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.089562   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 2
I1210 06:08:37.089597   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.089728   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 2 0:'test-1765346915509669717' 
I1210 06:08:37.089759   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 (ed6cde 6e034b0 '') 
I1210 06:08:37.089867   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.089898   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('test-1765346915509669717' 'jenkins' 'jenkins' '' q (ed6cde 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.090071   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.090100   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('test-1765346915509669717' 'jenkins' 'jenkins' '' q (ed6cde 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.090215   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 2
I1210 06:08:37.090239   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.090374   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 06:08:37.090411   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 (ed6cdd 6e034b0 '') 
I1210 06:08:37.090522   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.090552   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cdd 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.090678   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.090707   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cdd 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.090873   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 2
I1210 06:08:37.090896   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.091051   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:08:37.091083   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.091240   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 1
I1210 06:08:37.091270   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.390928   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 1 0:'test-1765346915509669717' 
I1210 06:08:37.391001   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 (ed6cde 6e034b0 '') 
I1210 06:08:37.391189   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 1
I1210 06:08:37.391235   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('test-1765346915509669717' 'jenkins' 'jenkins' '' q (ed6cde 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.391375   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 1 newfid 2 
I1210 06:08:37.391405   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 
I1210 06:08:37.391533   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Topen tag 0 fid 2 mode 0
I1210 06:08:37.391602   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Ropen tag 0 qid (ed6cde 6e034b0 '') iounit 0
I1210 06:08:37.391748   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 1
I1210 06:08:37.391804   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('test-1765346915509669717' 'jenkins' 'jenkins' '' q (ed6cde 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.391936   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 2 offset 0 count 262120
I1210 06:08:37.391990   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 24
I1210 06:08:37.392123   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 2 offset 24 count 262120
I1210 06:08:37.392155   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.392300   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 2 offset 24 count 262120
I1210 06:08:37.392335   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.392473   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 2
I1210 06:08:37.392506   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.392656   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 1
I1210 06:08:37.392693   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.717902   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 0
I1210 06:08:37.717974   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cdb 6e034b0 'd') m d775 at 0 mt 1765346915 l 4096 t 0 d 0 ext )
I1210 06:08:37.718352   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 1 
I1210 06:08:37.718445   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 
I1210 06:08:37.718604   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Topen tag 0 fid 1 mode 0
I1210 06:08:37.718661   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Ropen tag 0 qid (ed6cdb 6e034b0 'd') iounit 0
I1210 06:08:37.718825   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 0
I1210 06:08:37.718882   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cdb 6e034b0 'd') m d775 at 0 mt 1765346915 l 4096 t 0 d 0 ext )
I1210 06:08:37.719034   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 0 count 262120
I1210 06:08:37.719136   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 258
I1210 06:08:37.719316   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 258 count 261862
I1210 06:08:37.719348   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.719500   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:08:37.719529   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.719666   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 06:08:37.719701   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 (ed6cdc 6e034b0 '') 
I1210 06:08:37.719856   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.719935   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cdc 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.720082   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.720116   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cdc 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.720234   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 2
I1210 06:08:37.720257   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.720397   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 2 0:'test-1765346915509669717' 
I1210 06:08:37.720430   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 (ed6cde 6e034b0 '') 
I1210 06:08:37.720541   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.720585   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('test-1765346915509669717' 'jenkins' 'jenkins' '' q (ed6cde 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.720713   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.720756   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('test-1765346915509669717' 'jenkins' 'jenkins' '' q (ed6cde 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.720882   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 2
I1210 06:08:37.720904   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.721042   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 06:08:37.721076   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rwalk tag 0 (ed6cdd 6e034b0 '') 
I1210 06:08:37.721185   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.721219   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cdd 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.721351   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tstat tag 0 fid 2
I1210 06:08:37.721384   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cdd 6e034b0 '') m 644 at 0 mt 1765346915 l 24 t 0 d 0 ext )
I1210 06:08:37.721499   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 2
I1210 06:08:37.721519   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.721652   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tread tag 0 fid 1 offset 258 count 262120
I1210 06:08:37.721680   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rread tag 0 count 0
I1210 06:08:37.721822   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 1
I1210 06:08:37.721852   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.722924   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1210 06:08:37.722986   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rerror tag 0 ename 'file not found' ecode 0
I1210 06:08:37.998280   73138 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:36170 Tclunk tag 0 fid 0
I1210 06:08:37.998345   73138 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:36170 Rclunk tag 0
I1210 06:08:37.999599   73138 main.go:127] stdlog: ufs.go:147 disconnected
I1210 06:08:38.023362   73138 out.go:179] * Unmounting /mount-9p ...
I1210 06:08:38.026562   73138 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 06:08:38.034759   73138 mount.go:180] unmount for /mount-9p ran successfully
I1210 06:08:38.034894   73138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/.mount-process: {Name:mk955b1d2670578f624323c619cf6aa8ac56eb40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:08:38.038284   73138 out.go:203] 
W1210 06:08:38.041561   73138 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1210 06:08:38.044730   73138 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.61s)
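The mount trace above captures the full 9p flow: unmount any stale mount, create the mount point, then attach the guest to minikube's userspace file server. As a minimal sketch (not part of the test), the guest-side mount could be reproduced by hand inside the node, assuming the userspace server is still listening on 192.168.49.1:33367 as bound above:

    # Recreate the mount point and attach to the userspace 9p server;
    # options mirror the ssh_runner command logged above.
    sudo mkdir -p /mount-9p
    sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=33367,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p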

TestKubernetesUpgrade (793.84s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.759480252s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-712093
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-712093: (1.608685222s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-712093 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-712093 status --format={{.Host}}: exit status 7 (100.177115ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
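Condensed, the CLI sequence this test drives (taken from the dbg lines in this section; the profile name and flags are specific to this run) is:

    # Start on the old version, stop, confirm the host is Stopped, then
    # restart on the release candidate. Exit status 7 from "status" while
    # stopped is expected, per the check above.
    out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-712093
    out/minikube-linux-arm64 -p kubernetes-upgrade-712093 status --format={{.Host}}
    out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=containerd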
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1210 06:38:37.012562    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m32.077034853s)
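Note that no preload tarball exists for v1.35.0-rc.1: the stderr trace below shows both preload URLs returning 404, so minikube falls back to caching binaries and images individually. A quick way to check preload availability for a version (a sketch only; URL copied from the 404 warning below) is:

    # A 404 status here means the start will fall back to per-binary and
    # per-image downloads instead of the preloaded tarball.
    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 | head -n 1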

-- stdout --
	* [kubernetes-upgrade-712093] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-712093" primary control-plane node in "kubernetes-upgrade-712093" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1210 06:38:17.203760  213962 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:38:17.203973  213962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:17.204002  213962 out.go:374] Setting ErrFile to fd 2...
	I1210 06:38:17.204022  213962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:38:17.204310  213962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:38:17.204693  213962 out.go:368] Setting JSON to false
	I1210 06:38:17.205561  213962 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4848,"bootTime":1765343850,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:38:17.205651  213962 start.go:143] virtualization:  
	I1210 06:38:17.212602  213962 out.go:179] * [kubernetes-upgrade-712093] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:38:17.218264  213962 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:38:17.218418  213962 notify.go:221] Checking for updates...
	I1210 06:38:17.226774  213962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:38:17.233580  213962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:38:17.236644  213962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:38:17.239724  213962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:38:17.242750  213962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:38:17.246208  213962 config.go:182] Loaded profile config "kubernetes-upgrade-712093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1210 06:38:17.246782  213962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:38:17.290050  213962 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:38:17.290171  213962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:17.392746  213962 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:59 SystemTime:2025-12-10 06:38:17.383509051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:17.392841  213962 docker.go:319] overlay module found
	I1210 06:38:17.397595  213962 out.go:179] * Using the docker driver based on existing profile
	I1210 06:38:17.400960  213962 start.go:309] selected driver: docker
	I1210 06:38:17.400979  213962 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-712093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-712093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:17.401168  213962 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:38:17.402192  213962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:38:17.503390  213962 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:59 SystemTime:2025-12-10 06:38:17.493885241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:38:17.503703  213962 cni.go:84] Creating CNI manager for ""
	I1210 06:38:17.503750  213962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:17.503779  213962 start.go:353] cluster config:
	{Name:kubernetes-upgrade-712093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-712093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:17.509647  213962 out.go:179] * Starting "kubernetes-upgrade-712093" primary control-plane node in "kubernetes-upgrade-712093" cluster
	I1210 06:38:17.513204  213962 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:38:17.516670  213962 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:38:17.520124  213962 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:38:17.520305  213962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:38:17.554856  213962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:38:17.554881  213962 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:38:17.578384  213962 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 06:38:17.783765  213962 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 06:38:17.783899  213962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/config.json ...
	I1210 06:38:17.784131  213962 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:38:17.784171  213962 start.go:360] acquireMachinesLock for kubernetes-upgrade-712093: {Name:mkfa9bd60c79c927e201ff07158e4fc77ba255fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:17.784231  213962 start.go:364] duration metric: took 31.401µs to acquireMachinesLock for "kubernetes-upgrade-712093"
	I1210 06:38:17.784249  213962 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:38:17.784254  213962 fix.go:54] fixHost starting: 
	I1210 06:38:17.784509  213962 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-712093 --format={{.State.Status}}
	I1210 06:38:17.784808  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:17.804348  213962 fix.go:112] recreateIfNeeded on kubernetes-upgrade-712093: state=Stopped err=<nil>
	W1210 06:38:17.804375  213962 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:38:17.811395  213962 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-712093" ...
	I1210 06:38:17.811495  213962 cli_runner.go:164] Run: docker start kubernetes-upgrade-712093
	I1210 06:38:18.029769  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:18.184135  213962 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-712093 --format={{.State.Status}}
	I1210 06:38:18.208110  213962 kic.go:430] container "kubernetes-upgrade-712093" state is running.
	I1210 06:38:18.208515  213962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-712093
	I1210 06:38:18.243271  213962 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/config.json ...
	I1210 06:38:18.243508  213962 machine.go:94] provisionDockerMachine start ...
	I1210 06:38:18.243582  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:18.265780  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:18.267728  213962 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:18.268093  213962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1210 06:38:18.268104  213962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:38:18.268930  213962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36962->127.0.0.1:33013: read: connection reset by peer
	I1210 06:38:18.441037  213962 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441141  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:38:18.441149  213962 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 133.836µs
	I1210 06:38:18.441158  213962 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:38:18.441169  213962 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441199  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:38:18.441204  213962 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 35.996µs
	I1210 06:38:18.441210  213962 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:38:18.441224  213962 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441256  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:38:18.441261  213962 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 43.488µs
	I1210 06:38:18.441268  213962 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:38:18.441279  213962 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441308  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:38:18.441313  213962 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 34.979µs
	I1210 06:38:18.441319  213962 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:38:18.441328  213962 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441353  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:38:18.441358  213962 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 31.246µs
	I1210 06:38:18.441364  213962 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:38:18.441373  213962 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441400  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:38:18.441405  213962 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.707µs
	I1210 06:38:18.441411  213962 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:38:18.441419  213962 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441444  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:38:18.441448  213962 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 30.474µs
	I1210 06:38:18.441454  213962 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:38:18.441462  213962 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:38:18.441487  213962 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:38:18.441491  213962 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.269µs
	I1210 06:38:18.441497  213962 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:38:18.441506  213962 cache.go:87] Successfully saved all images to host disk.
	I1210 06:38:21.478499  213962 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-712093
	
	I1210 06:38:21.478521  213962 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-712093"
	I1210 06:38:21.478591  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:21.506442  213962 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:21.506926  213962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1210 06:38:21.506945  213962 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-712093 && echo "kubernetes-upgrade-712093" | sudo tee /etc/hostname
	I1210 06:38:21.691254  213962 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-712093
	
	I1210 06:38:21.691375  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:21.719293  213962 main.go:143] libmachine: Using SSH client type: native
	I1210 06:38:21.719606  213962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1210 06:38:21.719623  213962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-712093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-712093/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-712093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:38:21.895332  213962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:38:21.895357  213962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:38:21.895388  213962 ubuntu.go:190] setting up certificates
	I1210 06:38:21.895397  213962 provision.go:84] configureAuth start
	I1210 06:38:21.895457  213962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-712093
	I1210 06:38:21.928204  213962 provision.go:143] copyHostCerts
	I1210 06:38:21.928271  213962 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:38:21.928280  213962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:38:21.928351  213962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:38:21.928452  213962 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:38:21.928457  213962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:38:21.928485  213962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:38:21.928546  213962 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:38:21.928552  213962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:38:21.928576  213962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:38:21.928627  213962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-712093 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-712093 localhost minikube]
	I1210 06:38:22.222770  213962 provision.go:177] copyRemoteCerts
	I1210 06:38:22.222840  213962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:38:22.222884  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:22.264662  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:22.373067  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:38:22.398966  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 06:38:22.423730  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:38:22.450174  213962 provision.go:87] duration metric: took 554.749987ms to configureAuth
	I1210 06:38:22.450199  213962 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:38:22.450425  213962 config.go:182] Loaded profile config "kubernetes-upgrade-712093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:38:22.450432  213962 machine.go:97] duration metric: took 4.206917754s to provisionDockerMachine
	I1210 06:38:22.450440  213962 start.go:293] postStartSetup for "kubernetes-upgrade-712093" (driver="docker")
	I1210 06:38:22.450451  213962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:38:22.450504  213962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:38:22.450590  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:22.474059  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:22.583361  213962 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:38:22.586765  213962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:38:22.586792  213962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:38:22.586804  213962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:38:22.586858  213962 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:38:22.586933  213962 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:38:22.587058  213962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:38:22.594675  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:38:22.613561  213962 start.go:296] duration metric: took 163.092621ms for postStartSetup
	I1210 06:38:22.613637  213962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:38:22.613680  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:22.647793  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:22.759724  213962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:38:22.764699  213962 fix.go:56] duration metric: took 4.980437883s for fixHost
	I1210 06:38:22.764720  213962 start.go:83] releasing machines lock for "kubernetes-upgrade-712093", held for 4.980476972s
	I1210 06:38:22.764785  213962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-712093
	I1210 06:38:22.781274  213962 ssh_runner.go:195] Run: cat /version.json
	I1210 06:38:22.781328  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:22.781577  213962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:38:22.781630  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:22.799455  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:22.801089  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:23.028014  213962 ssh_runner.go:195] Run: systemctl --version
	I1210 06:38:23.034727  213962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:38:23.040745  213962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:38:23.040811  213962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:38:23.051509  213962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:38:23.051531  213962 start.go:496] detecting cgroup driver to use...
	I1210 06:38:23.051578  213962 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:38:23.051628  213962 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:38:23.073220  213962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:38:23.089724  213962 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:38:23.089787  213962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:38:23.107901  213962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:38:23.128554  213962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:38:23.268402  213962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:38:23.405000  213962 docker.go:234] disabling docker service ...
	I1210 06:38:23.405064  213962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:38:23.421147  213962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:38:23.434098  213962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:38:23.585219  213962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:38:23.739917  213962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:38:23.755637  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:38:23.775478  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:23.951924  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:38:23.965889  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:38:23.974897  213962 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:38:23.974965  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:38:23.984212  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:23.993202  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:38:24.003545  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:38:24.014452  213962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:38:24.024378  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:38:24.033805  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:38:24.044283  213962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
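
The run of sed -i commands above patches /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup = false to match the cgroupfs driver, normalizing the runc runtime type, and pointing conf_dir at /etc/cni/net.d. A minimal Go sketch of the same kind of edit for one key, using a regexp instead of sed (hypothetical helper; it assumes the file already contains a SystemdCgroup line):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("patched", path)
    }
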
	I1210 06:38:24.063451  213962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:38:24.073219  213962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:38:24.085928  213962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:24.239260  213962 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 06:38:24.456309  213962 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:38:24.456391  213962 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:38:24.461389  213962 start.go:564] Will wait 60s for crictl version
	I1210 06:38:24.461479  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:24.465222  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:38:24.505983  213962 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:38:24.510718  213962 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:24.554729  213962 ssh_runner.go:195] Run: containerd --version
	I1210 06:38:24.586173  213962 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:38:24.589178  213962 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-712093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:38:24.610598  213962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:38:24.616091  213962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
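
The /etc/hosts rewrite above follows the usual idempotent pattern: filter out any stale line for the name, append the fresh mapping, then copy a temp file over /etc/hosts. A rough local Go equivalent (a sketch; in the log the same steps run over SSH inside the node, and writing /etc/hosts requires root):

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost rewrites hostsPath so exactly one line maps name to ip.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing entry ending in "<tab>name", matching the
            // grep -v $'\t<name>$' filter in the logged command.
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
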
	I1210 06:38:24.630368  213962 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-712093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-712093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:38:24.630569  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:24.794323  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:24.963304  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:25.150518  213962 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:38:25.150614  213962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:38:25.187736  213962 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:38:25.187757  213962 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:38:25.187826  213962 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:38:25.188036  213962 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:38:25.188131  213962 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:38:25.188214  213962 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:38:25.188295  213962 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:38:25.188388  213962 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:38:25.188489  213962 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:38:25.188595  213962 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:38:25.189851  213962 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:38:25.190940  213962 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:38:25.191507  213962 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:38:25.191778  213962 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:38:25.192514  213962 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:38:25.193088  213962 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:38:25.193564  213962 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:38:25.193994  213962 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:38:25.513277  213962 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:38:25.513371  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:38:25.529916  213962 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:38:25.530000  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:38:25.545817  213962 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:38:25.545885  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:38:25.574352  213962 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:38:25.574452  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:38:25.602932  213962 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:38:25.603002  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:38:25.603416  213962 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:38:25.603464  213962 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:38:25.603522  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:25.603791  213962 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:38:25.603844  213962 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:38:25.603884  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:25.616711  213962 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:38:25.616748  213962 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:38:25.616802  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:25.638624  213962 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:38:25.638691  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:38:25.639340  213962 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:38:25.639392  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:38:25.639736  213962 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:38:25.639768  213962 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:38:25.639800  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:25.689684  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:38:25.689781  213962 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:38:25.689825  213962 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:38:25.689879  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:25.689960  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:38:25.690035  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:38:25.694354  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:38:25.694468  213962 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:38:25.694510  213962 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:38:25.694550  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:25.694637  213962 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:38:25.694658  213962 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:38:25.694684  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:25.822902  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:38:25.822990  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:38:25.823139  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:38:25.823208  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:38:25.823257  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:38:25.823295  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:38:25.823355  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:38:26.140609  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:38:26.140712  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:38:26.140787  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:38:26.140887  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:38:26.140995  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:38:26.141095  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:38:26.141186  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:38:26.371222  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:38:26.371329  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:38:26.371424  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:38:26.371486  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:38:26.371536  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:38:26.371598  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:38:26.371641  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:38:26.371707  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:38:26.371757  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:38:26.371820  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:38:26.371890  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	W1210 06:38:26.389714  213962 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:38:26.389841  213962 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:38:26.389897  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:38:26.421568  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:38:26.421613  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
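
Each "existence check ... Process exited with status 1" block above is the cache loader probing the node with stat -c "%s %y" and treating a non-zero exit as "file missing, transfer it". A stripped-down sketch of that decision (hypothetical helper, run locally rather than over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsTransfer reports whether path is absent on the target, using the
    // same probe as the log: `stat -c "%s %y" <path>` failing means "copy it".
    func needsTransfer(path string) bool {
        err := exec.Command("stat", "-c", "%s %y", path).Run()
        return err != nil // non-zero exit: the file is not there yet
    }

    func main() {
        p := "/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1"
        if needsTransfer(p) {
            fmt.Println("would scp cached image to", p)
        } else {
            fmt.Println(p, "already present, skipping transfer")
        }
    }
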
	I1210 06:38:26.421690  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:38:26.423001  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:38:26.510435  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:38:26.510484  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:38:26.510556  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:38:26.510570  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:38:26.510615  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:38:26.510628  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:38:26.511073  213962 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:38:26.511134  213962 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:38:26.511178  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:26.511453  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:38:26.511533  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:38:26.511558  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:38:26.511575  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:38:26.511615  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:38:26.511682  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	W1210 06:38:26.527400  213962 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1210 06:38:26.527446  213962 retry.go:31] will retry after 295.883241ms: ssh: rejected: connect failed (open failed)
	W1210 06:38:26.527601  213962 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1210 06:38:26.527609  213962 retry.go:31] will retry after 279.863222ms: ssh: rejected: connect failed (open failed)
	I1210 06:38:26.578330  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:38:26.578408  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:26.578609  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:38:26.578641  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:38:26.578711  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:26.621993  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:26.651479  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:26.655364  213962 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:38:26.655446  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1210 06:38:26.655517  213962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-712093
	I1210 06:38:26.726003  213962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/kubernetes-upgrade-712093/id_rsa Username:docker}
	I1210 06:38:26.938591  213962 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:38:26.938767  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:38:27.219745  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:38:27.219780  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:38:27.219863  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:38:27.219914  213962 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:38:27.219929  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:38:27.389816  213962 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:38:27.389882  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:38:29.299161  213962 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.909257057s)
	I1210 06:38:29.299184  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:38:29.299201  213962 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:38:29.299248  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:38:30.994646  213962 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.695378895s)
	I1210 06:38:30.994723  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:38:30.994764  213962 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:38:30.994844  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:38:32.396922  213962 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.402035969s)
	I1210 06:38:32.396949  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:38:32.396973  213962 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:38:32.397041  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:38:33.972851  213962 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.575785325s)
	I1210 06:38:33.972881  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:38:33.972898  213962 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:38:33.972943  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:38:35.207821  213962 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.234852052s)
	I1210 06:38:35.207851  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:38:35.207876  213962 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:38:35.207923  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:38:35.708491  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:38:35.708526  213962 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:38:35.708575  213962 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:38:37.064713  213962 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.35611712s)
	I1210 06:38:37.064743  213962 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:38:37.064761  213962 cache_images.go:125] Successfully loaded all cached images
	I1210 06:38:37.064767  213962 cache_images.go:94] duration metric: took 11.876987186s to LoadCachedImages
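
The image-loading phase that just completed repeats one pattern per image: list the containerd store with ctr -n=k8s.io images ls, remove any wrong-hash copy with crictl rmi, scp the cached tarball in, then import it. A compressed Go sketch of the final import step (illustrative only; it assumes the tarballs already sit under /var/lib/minikube/images):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        images := []string{
            "/var/lib/minikube/images/pause_3.10.1",
            "/var/lib/minikube/images/etcd_3.6.6-0",
        }
        for _, tar := range images {
            // Same invocation as the log: import into containerd's k8s.io
            // namespace so the kubelet/CRI layer can see the image.
            cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("import %s failed: %v\n%s", tar, err, out)
                continue
            }
            fmt.Println("loaded", tar)
        }
    }
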
	I1210 06:38:37.064775  213962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:38:37.064878  213962 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-712093 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-712093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
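
The generated unit above relies on a standard systemd drop-in idiom: the bare ExecStart= line clears the command list inherited from the base kubelet.service (systemd rejects a second ExecStart for a non-oneshot service otherwise), and the following ExecStart= installs the versioned kubelet with node-specific flags. A small Go sketch that renders such a drop-in (values taken from this run; the helper itself is hypothetical):

    package main

    import "fmt"

    func main() {
        const tmpl = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=%s --config=/var/lib/kubelet/config.yaml --hostname-override=%s --node-ip=%s
    `
        // The empty ExecStart= resets the unit's command list before the
        // override is appended.
        fmt.Printf(tmpl, "/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet",
            "kubernetes-upgrade-712093", "192.168.76.2")
    }
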
	I1210 06:38:37.064946  213962 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:38:37.094549  213962 cni.go:84] Creating CNI manager for ""
	I1210 06:38:37.094577  213962 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:38:37.094591  213962 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:38:37.094613  213962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-712093 NodeName:kubernetes-upgrade-712093 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:38:37.094740  213962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-712093"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
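
The config dump above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick sanity check on such a file is decoding each document and printing its kind and apiVersion; a minimal sketch using gopkg.in/yaml.v3 (an assumed dependency, not part of minikube's tooling):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // iterates over the "---" document separators
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%-25s %s\n", doc.Kind, doc.APIVersion)
        }
    }
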
	I1210 06:38:37.094809  213962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:38:37.104246  213962 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:38:37.104318  213962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:38:37.112059  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:38:37.112151  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:38:37.112224  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:38:37.112251  213962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:38:37.112323  213962 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:38:37.112365  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:38:37.152112  213962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:38:37.152418  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:38:37.152385  213962 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:38:37.152402  213962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:38:37.152622  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:38:37.168328  213962 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:38:37.168371  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
	I1210 06:38:38.346988  213962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:38:38.358761  213962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1210 06:38:38.373341  213962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:38:38.386853  213962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
	I1210 06:38:38.400879  213962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:38:38.404935  213962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:38:38.418442  213962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:38:38.560223  213962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:38:38.577979  213962 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093 for IP: 192.168.76.2
	I1210 06:38:38.578008  213962 certs.go:195] generating shared ca certs ...
	I1210 06:38:38.578039  213962 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:38.578195  213962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:38:38.578254  213962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:38:38.578267  213962 certs.go:257] generating profile certs ...
	I1210 06:38:38.578381  213962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.key
	I1210 06:38:38.578461  213962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/apiserver.key.598ec1dc
	I1210 06:38:38.578528  213962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/proxy-client.key
	I1210 06:38:38.578647  213962 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:38:38.578700  213962 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:38:38.578713  213962 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:38:38.578740  213962 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:38:38.578778  213962 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:38:38.578803  213962 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:38:38.578866  213962 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:38:38.579525  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:38:38.616373  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:38:38.650562  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:38:38.682561  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:38:38.712204  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 06:38:38.740203  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:38:38.760228  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:38:38.783738  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:38:38.809147  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:38:38.837718  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:38:38.859583  213962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:38:38.878163  213962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:38:38.892419  213962 ssh_runner.go:195] Run: openssl version
	I1210 06:38:38.900995  213962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:38.911271  213962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:38:38.924550  213962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:38.929111  213962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:38.929193  213962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:38:38.976138  213962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:38:38.983588  213962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:38:38.990879  213962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:38:39.001582  213962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:38:39.007254  213962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:38:39.007346  213962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:38:39.049704  213962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:38:39.064988  213962 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:38:39.073404  213962 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:38:39.082096  213962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:38:39.086641  213962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:38:39.086733  213962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:38:39.129913  213962 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:38:39.139167  213962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:38:39.144622  213962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:38:39.195473  213962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:38:39.237468  213962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:38:39.280004  213962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:38:39.322196  213962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:38:39.365437  213962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
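
Each openssl x509 -noout -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if so and 1 if the certificate would expire, so the exit code alone carries the answer. A tiny Go sketch of the same probe (hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithin reports whether the certificate at path expires within
    // the next `seconds` seconds, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, seconds int) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
            "-checkend", fmt.Sprint(seconds))
        return cmd.Run() != nil // exit 1 => certificate will expire
    }

    func main() {
        p := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
        if expiresWithin(p, 86400) {
            fmt.Println(p, "expires within 24h, needs regeneration")
        } else {
            fmt.Println(p, "valid for at least another 24h")
        }
    }
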
	I1210 06:38:39.409886  213962 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-712093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-712093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:38:39.409995  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:38:39.410055  213962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:39.500053  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:38:39.500084  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:38:39.500090  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:38:39.500094  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:38:39.500097  213962 cri.go:89] found id: ""
	I1210 06:38:39.500167  213962 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1210 06:38:39.516524  213962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:38:39Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1210 06:38:39.516607  213962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:38:39.525958  213962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:38:39.526004  213962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:38:39.526062  213962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:38:39.535443  213962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:39.535883  213962 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-712093" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:38:39.536006  213962 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-712093" cluster setting kubeconfig missing "kubernetes-upgrade-712093" context setting]
	I1210 06:38:39.536342  213962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:38:39.536943  213962 kapi.go:59] client config for kubernetes-upgrade-712093: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.crt", KeyFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.key", CAFile:"/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:38:39.537759  213962 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:38:39.537795  213962 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:38:39.537806  213962 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:38:39.537812  213962 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:38:39.537834  213962 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:38:39.538180  213962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:38:39.548602  213962 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 06:37:51.697092545 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:38:38.397698015 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-712093"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
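Two things are visible in the diff above: the v1beta3-to-v1beta4 kubeadm API migration turns every extraArgs map into a list of name/value pairs (and kubernetesVersion jumps to v1.35.0-rc.1), and drift itself is detected simply by running `diff -u` over the two rendered configs, where exit status 1 means the files differ. A sketch of that exit-status-based drift check:

```go
// Sketch: detect kubeadm config drift via diff's exit status,
// as kubeadm.go:645 does above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func drift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ (drift)
	}
	return false, "", err // exit >= 2: diff itself failed
}

func main() {
	changed, patch, err := drift("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if changed {
		fmt.Print(patch) // the same unified diff shown in the log
	}
}
```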
	I1210 06:38:39.548630  213962 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:38:39.548650  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1210 06:38:39.548712  213962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:38:39.585098  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:38:39.585124  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:38:39.585129  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:38:39.585133  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:38:39.585136  213962 cri.go:89] found id: ""
	I1210 06:38:39.585149  213962 cri.go:252] Stopping containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:38:39.585225  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:38:39.589340  213962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988
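The kube-system containers are discovered by the CRI pod-namespace label and then stopped in a single batch, mirroring the two crictl invocations above. A sketch of that pair of steps (passwordless sudo and a crictl binary on PATH are assumptions):

```go
// Sketch: list kube-system containers by CRI label, then stop the batch
// with a 10s grace period, as in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out)) // one 64-hex container ID per line
	if len(ids) == 0 {
		return // nothing to stop
	}
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d kube-system containers\n", len(ids))
}
```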
	I1210 06:38:39.620007  213962 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:38:39.636251  213962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:38:39.645017  213962 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 10 06:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 10 06:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 10 06:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 10 06:38 /etc/kubernetes/scheduler.conf
	
	I1210 06:38:39.645099  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:38:39.654463  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:38:39.663692  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.672914  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:39.672998  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:38:39.681109  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:38:39.690130  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:38:39.690219  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
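Each kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint; an exit status of 1 from grep means the endpoint is absent, so the file is removed and kubeadm regenerates it in the next phase (here controller-manager.conf and scheduler.conf were dropped, while admin.conf and kubelet.conf matched). A sketch of that loop, assuming the process already runs as root where the log uses sudo:

```go
// Sketch: remove any /etc/kubernetes/*.conf that does not reference the
// control-plane endpoint, so "kubeadm init phase kubeconfig" rewrites it.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		err := exec.Command("grep", endpoint, f).Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			fmt.Println("removing stale", f) // endpoint not found
			os.Remove(f)
		}
	}
}
```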
	I1210 06:38:39.698148  213962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:38:39.706551  213962 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:39.770099  213962 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:40.981859  213962 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.211722003s)
	I1210 06:38:40.981948  213962 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:41.309224  213962 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:38:41.451582  213962 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
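Rather than running a full `kubeadm init`, the upgrade path replays individual init phases against the new config, with PATH prefixed by the versioned binaries directory. A sketch of the same sequence (note that when Cmd.Env contains duplicate keys, Go's os/exec uses the last one, so appending a PATH entry overrides the inherited value):

```go
// Sketch: replay kubeadm init phases with the v1.35.0-rc.1 binaries
// first on PATH, matching the five commands above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.35.0-rc.1"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...),
			"--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err) // a failed phase aborts the reconfigure
		}
	}
}
```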
	I1210 06:38:41.555954  213962 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:38:41.556033  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:38:42.056829  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeats twice per second, from 06:38:42.557 through 06:39:40.556, without finding a matching process; 117 near-identical lines elided ...]
	I1210 06:39:41.057092  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
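The minute-long stretch above is api_server.go polling for the apiserver process at roughly 500ms intervals until its deadline expires. A sketch of such a wait loop (pgrep exits 0 once a matching process exists):

```go
// Sketch: the ~500ms apiserver wait loop seen above, bounded by a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest only, -f match the full command line
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println("apiserver up:", waitForAPIServer(60*time.Second))
}
```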
	I1210 06:39:41.556187  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:41.556279  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:41.613534  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:41.613555  213962 cri.go:89] found id: ""
	I1210 06:39:41.613564  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:39:41.613617  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:41.617454  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:41.617522  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:41.699115  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:41.699187  213962 cri.go:89] found id: ""
	I1210 06:39:41.699209  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:39:41.699299  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:41.709869  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:41.709988  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:41.763889  213962 cri.go:89] found id: ""
	I1210 06:39:41.763966  213962 logs.go:282] 0 containers: []
	W1210 06:39:41.763989  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:39:41.764009  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:41.764094  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:41.827872  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:41.827932  213962 cri.go:89] found id: ""
	I1210 06:39:41.827965  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:39:41.828049  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:41.832258  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:41.832331  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:41.888872  213962 cri.go:89] found id: ""
	I1210 06:39:41.888898  213962 logs.go:282] 0 containers: []
	W1210 06:39:41.888908  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:41.888914  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:41.888985  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:41.935259  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:41.935278  213962 cri.go:89] found id: ""
	I1210 06:39:41.935287  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:39:41.935340  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:41.943706  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:41.943783  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:41.989148  213962 cri.go:89] found id: ""
	I1210 06:39:41.989175  213962 logs.go:282] 0 containers: []
	W1210 06:39:41.989185  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:41.989191  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:39:41.989248  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:39:42.037208  213962 cri.go:89] found id: ""
	I1210 06:39:42.037234  213962 logs.go:282] 0 containers: []
	W1210 06:39:42.037243  213962 logs.go:284] No container was found matching "storage-provisioner"
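With the apiserver still absent, minikube switches to diagnosis: it probes for each control-plane component with crictl's --name filter. Zero matches for coredns, kube-proxy, kindnet, and storage-provisioner is expected here, since the cluster never came back up after the kube-system containers were stopped. A sketch of that probe loop:

```go
// Sketch: per-component container discovery with crictl --name,
// matching the cri.go:54 probes above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%-24s %d container(s) %v\n", name, len(ids), ids)
	}
}
```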
	I1210 06:39:42.037258  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:42.037270  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:42.139066  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:42.139207  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:42.186506  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:42.186610  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:42.210174  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:42.210262  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:42.331205  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:42.331283  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:39:42.331312  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:42.410038  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:39:42.410122  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:42.449213  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:39:42.449293  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:42.491901  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:39:42.491931  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:42.540086  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:39:42.540167  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
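Each diagnosis round fans out over the same sources: journalctl for kubelet and containerd, filtered dmesg, `kubectl describe nodes` (which keeps failing with "connection refused" because the apiserver never came up), and `crictl logs --tail 400` per discovered container. A sketch of the fan-out; the two container IDs are placeholders for the IDs found by the probes above:

```go
// Sketch: gather the last 400 lines from each diagnostic source, as the
// logs.go "Gathering logs for ..." rounds above do.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, script string) {
	out, _ := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("==> %s <==\n%s\n", name, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	// Placeholder IDs: substitute the container IDs found by the crictl probes.
	for _, id := range []string{"<kube-apiserver-id>", "<etcd-id>"} {
		gather("container "+id, "sudo /usr/local/bin/crictl logs --tail 400 "+id)
	}
}
```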
	I1210 06:39:45.088117  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:45.101297  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:45.101391  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:45.145106  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:45.145129  213962 cri.go:89] found id: ""
	I1210 06:39:45.145138  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:39:45.145204  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:45.156148  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:45.156233  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:45.207812  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:45.207845  213962 cri.go:89] found id: ""
	I1210 06:39:45.207855  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:39:45.207976  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:45.214596  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:45.214827  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:45.257311  213962 cri.go:89] found id: ""
	I1210 06:39:45.257383  213962 logs.go:282] 0 containers: []
	W1210 06:39:45.257405  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:39:45.257423  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:45.257511  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:45.308618  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:45.308713  213962 cri.go:89] found id: ""
	I1210 06:39:45.308754  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:39:45.308858  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:45.315618  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:45.315707  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:45.357584  213962 cri.go:89] found id: ""
	I1210 06:39:45.357660  213962 logs.go:282] 0 containers: []
	W1210 06:39:45.357683  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:45.357701  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:45.357794  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:45.400029  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:45.400103  213962 cri.go:89] found id: ""
	I1210 06:39:45.400125  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:39:45.400218  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:45.406964  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:45.407105  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:45.440067  213962 cri.go:89] found id: ""
	I1210 06:39:45.440145  213962 logs.go:282] 0 containers: []
	W1210 06:39:45.440167  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:45.440186  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:39:45.440276  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:39:45.477581  213962 cri.go:89] found id: ""
	I1210 06:39:45.477621  213962 logs.go:282] 0 containers: []
	W1210 06:39:45.477629  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:39:45.477644  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:45.477659  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:45.547211  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:45.547248  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:45.651162  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:45.651184  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:39:45.651205  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:45.713577  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:39:45.713612  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:45.767647  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:39:45.767679  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:45.816415  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:39:45.816451  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:45.865071  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:39:45.865143  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:45.898014  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:45.898043  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:45.914579  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:45.914606  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:48.463414  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:48.473914  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:48.473993  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:48.497844  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:48.497867  213962 cri.go:89] found id: ""
	I1210 06:39:48.497875  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:39:48.497931  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:48.501670  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:48.501747  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:48.527183  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:48.527207  213962 cri.go:89] found id: ""
	I1210 06:39:48.527215  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:39:48.527274  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:48.530783  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:48.530905  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:48.558095  213962 cri.go:89] found id: ""
	I1210 06:39:48.558116  213962 logs.go:282] 0 containers: []
	W1210 06:39:48.558140  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:39:48.558147  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:48.558204  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:48.605757  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:48.605777  213962 cri.go:89] found id: ""
	I1210 06:39:48.605786  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:39:48.605845  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:48.611135  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:48.611207  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:48.652215  213962 cri.go:89] found id: ""
	I1210 06:39:48.652241  213962 logs.go:282] 0 containers: []
	W1210 06:39:48.652250  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:48.652256  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:48.652314  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:48.741636  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:48.741715  213962 cri.go:89] found id: ""
	I1210 06:39:48.741727  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:39:48.741792  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:48.752321  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:48.752398  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:48.788567  213962 cri.go:89] found id: ""
	I1210 06:39:48.788591  213962 logs.go:282] 0 containers: []
	W1210 06:39:48.788598  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:48.788605  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:39:48.788681  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:39:48.814653  213962 cri.go:89] found id: ""
	I1210 06:39:48.814698  213962 logs.go:282] 0 containers: []
	W1210 06:39:48.814706  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:39:48.814721  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:48.814732  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:48.920057  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:48.920086  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:39:48.920099  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:48.963717  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:48.963751  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:49.012616  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:49.012803  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:49.086906  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:49.086936  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:49.101876  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:39:49.101911  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:49.150469  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:39:49.150744  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:49.198083  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:39:49.198114  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:49.234205  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:39:49.234236  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:51.780460  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:51.791596  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:51.791661  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:51.832501  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:51.832519  213962 cri.go:89] found id: ""
	I1210 06:39:51.832527  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:39:51.832581  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:51.843758  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:51.843824  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:51.877984  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:51.878002  213962 cri.go:89] found id: ""
	I1210 06:39:51.878011  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:39:51.878121  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:51.884115  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:51.884185  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:51.932106  213962 cri.go:89] found id: ""
	I1210 06:39:51.932128  213962 logs.go:282] 0 containers: []
	W1210 06:39:51.932137  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:39:51.932143  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:51.932201  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:51.984171  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:51.984190  213962 cri.go:89] found id: ""
	I1210 06:39:51.984198  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:39:51.984251  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:51.991102  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:51.991180  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:52.032982  213962 cri.go:89] found id: ""
	I1210 06:39:52.033004  213962 logs.go:282] 0 containers: []
	W1210 06:39:52.033012  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:52.033018  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:52.033075  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:52.073383  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:52.073403  213962 cri.go:89] found id: ""
	I1210 06:39:52.073412  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:39:52.073471  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:52.077614  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:52.077724  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:52.112267  213962 cri.go:89] found id: ""
	I1210 06:39:52.112287  213962 logs.go:282] 0 containers: []
	W1210 06:39:52.112296  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:52.112309  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:39:52.112366  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:39:52.153124  213962 cri.go:89] found id: ""
	I1210 06:39:52.153145  213962 logs.go:282] 0 containers: []
	W1210 06:39:52.153154  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:39:52.153170  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:52.153181  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:52.234978  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:52.235111  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:52.250214  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:39:52.250239  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:52.294035  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:39:52.294120  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:52.344885  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:39:52.344964  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:52.391462  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:39:52.391533  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:52.447742  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:52.447781  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:52.592337  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:52.592360  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:39:52.592374  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:52.641067  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:52.641100  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:55.185341  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:55.200423  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:55.200489  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:55.286034  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:55.286053  213962 cri.go:89] found id: ""
	I1210 06:39:55.286061  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:39:55.286112  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:55.293739  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:55.293805  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:55.336602  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:55.336672  213962 cri.go:89] found id: ""
	I1210 06:39:55.336694  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:39:55.336781  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:55.344112  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:55.344187  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:55.387076  213962 cri.go:89] found id: ""
	I1210 06:39:55.387097  213962 logs.go:282] 0 containers: []
	W1210 06:39:55.387105  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:39:55.387114  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:55.387174  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:55.465810  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:55.465828  213962 cri.go:89] found id: ""
	I1210 06:39:55.465836  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:39:55.465887  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:55.475614  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:55.475704  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:55.556631  213962 cri.go:89] found id: ""
	I1210 06:39:55.556653  213962 logs.go:282] 0 containers: []
	W1210 06:39:55.556661  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:55.556673  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:55.556731  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:55.626185  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:55.626204  213962 cri.go:89] found id: ""
	I1210 06:39:55.626212  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:39:55.626267  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:55.630980  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:55.631117  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:55.670260  213962 cri.go:89] found id: ""
	I1210 06:39:55.670280  213962 logs.go:282] 0 containers: []
	W1210 06:39:55.670289  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:55.670294  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:39:55.670350  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:39:55.702878  213962 cri.go:89] found id: ""
	I1210 06:39:55.702900  213962 logs.go:282] 0 containers: []
	W1210 06:39:55.702909  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:39:55.702925  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:39:55.702937  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:55.760863  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:55.760951  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:39:55.823972  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:55.827089  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:55.919313  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:55.919407  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:55.936277  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:39:55.936301  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:55.997719  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:39:55.997805  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:56.081982  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:39:56.082060  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:56.157366  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:56.157445  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:56.354593  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:56.354657  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:39:56.354683  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:58.907374  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:39:58.917575  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:39:58.917640  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:39:58.942664  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:58.942683  213962 cri.go:89] found id: ""
	I1210 06:39:58.942691  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:39:58.942749  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:58.946149  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:39:58.946251  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:39:58.970606  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:58.970629  213962 cri.go:89] found id: ""
	I1210 06:39:58.970638  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:39:58.970700  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:58.974231  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:39:58.974304  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:39:58.997517  213962 cri.go:89] found id: ""
	I1210 06:39:58.997544  213962 logs.go:282] 0 containers: []
	W1210 06:39:58.997553  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:39:58.997564  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:39:58.997627  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:39:59.028983  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:59.029005  213962 cri.go:89] found id: ""
	I1210 06:39:59.029013  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:39:59.029068  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:59.032504  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:39:59.032573  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:39:59.059527  213962 cri.go:89] found id: ""
	I1210 06:39:59.059550  213962 logs.go:282] 0 containers: []
	W1210 06:39:59.059558  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:39:59.059570  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:39:59.059629  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:39:59.085338  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:59.085359  213962 cri.go:89] found id: ""
	I1210 06:39:59.085373  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:39:59.085428  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:39:59.089004  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:39:59.089070  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:39:59.113886  213962 cri.go:89] found id: ""
	I1210 06:39:59.113964  213962 logs.go:282] 0 containers: []
	W1210 06:39:59.113993  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:39:59.114024  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:39:59.114119  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:39:59.138449  213962 cri.go:89] found id: ""
	I1210 06:39:59.138513  213962 logs.go:282] 0 containers: []
	W1210 06:39:59.138537  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:39:59.138571  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:39:59.138598  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:39:59.197093  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:39:59.197176  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:39:59.236309  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:39:59.236341  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:39:59.296576  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:39:59.296610  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:39:59.309462  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:39:59.309491  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:39:59.380183  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:39:59.380205  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:39:59.380218  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:39:59.416939  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:39:59.416968  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:39:59.450965  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:39:59.450997  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:39:59.480129  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:39:59.480155  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
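The cycle above shows the diagnostics pass minikube repeats while the apiserver is down: for each control-plane component it lists matching CRI container IDs, then tails the logs of whatever it found. A minimal standalone Go sketch of that per-component lookup (illustrative only, not minikube's actual code; it assumes crictl is on PATH and sudo needs no password):

// Sketch only: mirrors the logged lookup
// `sudo crictl ps -a --quiet --name=<component>` for each
// control-plane component. Not minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of CRI containers whose name matches
// component, exactly as the logged crictl invocation does.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component order the log walks through.
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	} {
		ids, err := containerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("%s: lookup failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container was found matching %q\n", c)
		default:
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}

Run on the node, this reproduces the found-id / "No container was found matching" pairs in the log: only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager report IDs here, while coredns, kube-proxy, kindnet, and storage-provisioner come back empty.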
	I1210 06:40:02.015142  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:02.028492  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:02.028578  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:02.060728  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:02.060750  213962 cri.go:89] found id: ""
	I1210 06:40:02.060758  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:02.060814  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:02.064898  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:02.064983  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:02.099144  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:02.099220  213962 cri.go:89] found id: ""
	I1210 06:40:02.099242  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:02.099317  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:02.104208  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:02.104279  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:02.134602  213962 cri.go:89] found id: ""
	I1210 06:40:02.134624  213962 logs.go:282] 0 containers: []
	W1210 06:40:02.134632  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:02.134639  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:02.134713  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:02.171620  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:02.171641  213962 cri.go:89] found id: ""
	I1210 06:40:02.171649  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:02.171708  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:02.176428  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:02.176550  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:02.209673  213962 cri.go:89] found id: ""
	I1210 06:40:02.209713  213962 logs.go:282] 0 containers: []
	W1210 06:40:02.209723  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:02.209729  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:02.209798  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:02.252969  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:02.253036  213962 cri.go:89] found id: ""
	I1210 06:40:02.253059  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:02.253141  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:02.257456  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:02.257540  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:02.285826  213962 cri.go:89] found id: ""
	I1210 06:40:02.285854  213962 logs.go:282] 0 containers: []
	W1210 06:40:02.285864  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:02.285870  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:02.285931  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:02.312061  213962 cri.go:89] found id: ""
	I1210 06:40:02.312084  213962 logs.go:282] 0 containers: []
	W1210 06:40:02.312093  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:02.312106  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:02.312117  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:02.369436  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:02.369472  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:02.383399  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:02.383430  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:02.460100  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:02.460119  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:02.460131  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:02.501036  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:02.501068  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:02.533096  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:02.533130  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:02.566706  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:02.566739  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:02.599817  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:02.599849  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:02.629881  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:02.629913  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
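The probe timestamps (06:39:58, 06:40:02, 06:40:05, ...) suggest the harness re-checks for a running apiserver roughly every three seconds, running the diagnostics pass above after each miss. A sketch of such a wait loop; the 3-second interval and 6-minute budget are assumptions read off the timestamps, not configured values taken from minikube:

// Sketch of the implied wait loop; interval and deadline are assumed.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the logged probe:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when at least one process matched.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// In the real log each miss triggers a diagnostics pass
		// (crictl listings, journalctl, kubectl describe nodes).
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}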
	I1210 06:40:05.161197  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:05.181969  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:05.182051  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:05.240619  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:05.240642  213962 cri.go:89] found id: ""
	I1210 06:40:05.240651  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:05.240729  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:05.244697  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:05.244770  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:05.299245  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:05.299309  213962 cri.go:89] found id: ""
	I1210 06:40:05.299332  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:05.299416  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:05.303375  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:05.303486  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:05.338769  213962 cri.go:89] found id: ""
	I1210 06:40:05.338838  213962 logs.go:282] 0 containers: []
	W1210 06:40:05.338859  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:05.338878  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:05.338966  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:05.369137  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:05.369211  213962 cri.go:89] found id: ""
	I1210 06:40:05.369233  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:05.369319  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:05.373440  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:05.373552  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:05.406526  213962 cri.go:89] found id: ""
	I1210 06:40:05.406611  213962 logs.go:282] 0 containers: []
	W1210 06:40:05.406633  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:05.406663  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:05.406741  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:05.435749  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:05.435820  213962 cri.go:89] found id: ""
	I1210 06:40:05.435842  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:05.435938  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:05.439772  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:05.439885  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:05.469274  213962 cri.go:89] found id: ""
	I1210 06:40:05.469340  213962 logs.go:282] 0 containers: []
	W1210 06:40:05.469375  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:05.469394  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:05.469498  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:05.501297  213962 cri.go:89] found id: ""
	I1210 06:40:05.501377  213962 logs.go:282] 0 containers: []
	W1210 06:40:05.501399  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:05.501425  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:05.501470  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:05.555976  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:05.556047  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:05.590655  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:05.590724  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:05.632710  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:05.632871  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:05.668464  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:05.668547  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:05.682126  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:05.682205  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:05.760840  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:05.760899  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:05.760925  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:05.809082  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:05.809233  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:05.848312  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:05.848335  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
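Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach localhost:8443, meaning nothing is accepting connections on the apiserver port even though a kube-apiserver container exists. A quick way to confirm that distinction is a bare TCP probe (illustrative, not part of the test harness):

// Illustrative TCP probe of the apiserver port; "connection refused"
// here matches the kubectl failure above and means no listener,
// as opposed to a TLS or auth problem.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open; the kubectl failure lies elsewhere")
}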
	I1210 06:40:08.416597  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:08.432119  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:08.432182  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:08.481585  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:08.481605  213962 cri.go:89] found id: ""
	I1210 06:40:08.481613  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:08.481668  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:08.485788  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:08.485857  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:08.528654  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:08.528674  213962 cri.go:89] found id: ""
	I1210 06:40:08.528683  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:08.528743  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:08.533132  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:08.533285  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:08.593130  213962 cri.go:89] found id: ""
	I1210 06:40:08.593211  213962 logs.go:282] 0 containers: []
	W1210 06:40:08.593233  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:08.593251  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:08.593361  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:08.628185  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:08.628255  213962 cri.go:89] found id: ""
	I1210 06:40:08.628277  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:08.628373  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:08.634079  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:08.634236  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:08.677410  213962 cri.go:89] found id: ""
	I1210 06:40:08.677477  213962 logs.go:282] 0 containers: []
	W1210 06:40:08.677499  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:08.677517  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:08.677611  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:08.710911  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:08.710988  213962 cri.go:89] found id: ""
	I1210 06:40:08.711035  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:08.711125  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:08.715378  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:08.715498  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:08.757927  213962 cri.go:89] found id: ""
	I1210 06:40:08.758009  213962 logs.go:282] 0 containers: []
	W1210 06:40:08.758035  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:08.758073  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:08.758202  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:08.797257  213962 cri.go:89] found id: ""
	I1210 06:40:08.797345  213962 logs.go:282] 0 containers: []
	W1210 06:40:08.797368  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:08.797412  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:08.797440  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:08.886226  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:08.886298  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:09.059711  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:09.059785  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:09.059815  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:09.117428  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:09.117472  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:09.187954  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:09.187982  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:09.203683  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:09.203716  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:09.259585  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:09.259617  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:09.311723  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:09.311754  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:09.373064  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:09.373093  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:11.923151  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:11.934186  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:11.934260  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:11.995577  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:11.995599  213962 cri.go:89] found id: ""
	I1210 06:40:11.995606  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:11.995662  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:12.018759  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:12.018837  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:12.070235  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:12.070256  213962 cri.go:89] found id: ""
	I1210 06:40:12.070264  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:12.070321  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:12.074461  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:12.074549  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:12.130535  213962 cri.go:89] found id: ""
	I1210 06:40:12.130560  213962 logs.go:282] 0 containers: []
	W1210 06:40:12.130569  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:12.130575  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:12.130634  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:12.174719  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:12.174742  213962 cri.go:89] found id: ""
	I1210 06:40:12.174751  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:12.174806  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:12.178388  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:12.178457  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:12.217964  213962 cri.go:89] found id: ""
	I1210 06:40:12.217985  213962 logs.go:282] 0 containers: []
	W1210 06:40:12.217993  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:12.217999  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:12.218061  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:12.253817  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:12.253837  213962 cri.go:89] found id: ""
	I1210 06:40:12.253846  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:12.253904  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:12.258527  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:12.258596  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:12.289808  213962 cri.go:89] found id: ""
	I1210 06:40:12.289832  213962 logs.go:282] 0 containers: []
	W1210 06:40:12.289841  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:12.289848  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:12.289908  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:12.319397  213962 cri.go:89] found id: ""
	I1210 06:40:12.319421  213962 logs.go:282] 0 containers: []
	W1210 06:40:12.319430  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:12.319444  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:12.319456  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:12.413193  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:12.413215  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:12.413226  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:12.453803  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:12.453834  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:12.501917  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:12.501950  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:12.537901  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:12.537932  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:12.605596  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:12.605642  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:12.621245  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:12.621273  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:12.665369  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:12.665398  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:12.719385  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:12.719424  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:15.292599  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:15.306536  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:15.306614  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:15.346886  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:15.346908  213962 cri.go:89] found id: ""
	I1210 06:40:15.346917  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:15.346975  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:15.353944  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:15.354015  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:15.411904  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:15.411927  213962 cri.go:89] found id: ""
	I1210 06:40:15.411935  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:15.412002  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:15.415950  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:15.416019  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:15.479048  213962 cri.go:89] found id: ""
	I1210 06:40:15.479071  213962 logs.go:282] 0 containers: []
	W1210 06:40:15.479079  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:15.479085  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:15.479146  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:15.523930  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:15.523950  213962 cri.go:89] found id: ""
	I1210 06:40:15.523963  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:15.524033  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:15.527899  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:15.528027  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:15.586652  213962 cri.go:89] found id: ""
	I1210 06:40:15.586694  213962 logs.go:282] 0 containers: []
	W1210 06:40:15.586702  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:15.586709  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:15.586766  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:15.628590  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:15.628669  213962 cri.go:89] found id: ""
	I1210 06:40:15.628680  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:15.628743  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:15.632881  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:15.633022  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:15.680297  213962 cri.go:89] found id: ""
	I1210 06:40:15.680321  213962 logs.go:282] 0 containers: []
	W1210 06:40:15.680329  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:15.680336  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:15.680486  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:15.718985  213962 cri.go:89] found id: ""
	I1210 06:40:15.719005  213962 logs.go:282] 0 containers: []
	W1210 06:40:15.719048  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:15.719067  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:15.719079  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:15.734015  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:15.734040  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:15.794687  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:15.794761  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:15.849966  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:15.849992  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:15.899342  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:15.899414  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:15.953481  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:15.953558  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:16.030720  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:16.030867  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:16.166674  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:16.166691  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:16.166707  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:16.204927  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:16.204998  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:18.756287  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:18.768101  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:18.768222  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:18.811398  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:18.811421  213962 cri.go:89] found id: ""
	I1210 06:40:18.811429  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:18.811481  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:18.817219  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:18.817290  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:18.855169  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:18.855188  213962 cri.go:89] found id: ""
	I1210 06:40:18.855196  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:18.855247  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:18.864976  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:18.865070  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:18.900999  213962 cri.go:89] found id: ""
	I1210 06:40:18.901020  213962 logs.go:282] 0 containers: []
	W1210 06:40:18.901029  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:18.901035  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:18.901096  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:18.968650  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:18.968674  213962 cri.go:89] found id: ""
	I1210 06:40:18.968683  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:18.968747  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:18.981974  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:18.982053  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:19.053137  213962 cri.go:89] found id: ""
	I1210 06:40:19.053170  213962 logs.go:282] 0 containers: []
	W1210 06:40:19.053180  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:19.053186  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:19.053251  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:19.105209  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:19.105234  213962 cri.go:89] found id: ""
	I1210 06:40:19.105243  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:19.105313  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:19.109322  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:19.109413  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:19.170803  213962 cri.go:89] found id: ""
	I1210 06:40:19.170826  213962 logs.go:282] 0 containers: []
	W1210 06:40:19.170835  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:19.170841  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:19.170898  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:19.207896  213962 cri.go:89] found id: ""
	I1210 06:40:19.207921  213962 logs.go:282] 0 containers: []
	W1210 06:40:19.207929  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:19.207954  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:19.207967  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:19.304629  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:19.304647  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:19.304659  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:19.358117  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:19.358150  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:19.396471  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:19.396503  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:19.440937  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:19.441017  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:19.519929  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:19.519962  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:19.538903  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:19.538929  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:19.581996  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:19.582031  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:19.619699  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:19.619732  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:22.155647  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:22.166224  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:22.166296  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:22.190883  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:22.190902  213962 cri.go:89] found id: ""
	I1210 06:40:22.190910  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:22.190968  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:22.194732  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:22.194807  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:22.220300  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:22.220322  213962 cri.go:89] found id: ""
	I1210 06:40:22.220330  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:22.220386  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:22.223959  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:22.224028  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:22.259215  213962 cri.go:89] found id: ""
	I1210 06:40:22.259241  213962 logs.go:282] 0 containers: []
	W1210 06:40:22.259250  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:22.259256  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:22.259318  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:22.284132  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:22.284155  213962 cri.go:89] found id: ""
	I1210 06:40:22.284163  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:22.284235  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:22.287772  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:22.287844  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:22.312783  213962 cri.go:89] found id: ""
	I1210 06:40:22.312807  213962 logs.go:282] 0 containers: []
	W1210 06:40:22.312821  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:22.312827  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:22.312898  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:22.341511  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:22.341534  213962 cri.go:89] found id: ""
	I1210 06:40:22.341542  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:22.341609  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:22.345140  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:22.345231  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:22.369743  213962 cri.go:89] found id: ""
	I1210 06:40:22.369766  213962 logs.go:282] 0 containers: []
	W1210 06:40:22.369775  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:22.369781  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:22.369855  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:22.398406  213962 cri.go:89] found id: ""
	I1210 06:40:22.398428  213962 logs.go:282] 0 containers: []
	W1210 06:40:22.398437  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:22.398467  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:22.398489  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:22.455839  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:22.455871  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:22.468905  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:22.468932  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:22.533556  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:22.533574  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:22.533587  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:22.562452  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:22.562481  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:22.597427  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:22.597453  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:22.640318  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:22.640350  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:22.695228  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:22.695310  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:22.727152  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:22.727238  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
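The "container status" step in each cycle uses a shell fallback: resolve crictl via which (echoing the bare name so the command still fails loudly if crictl is absent), and if that invocation fails, fall back to docker ps -a. A sketch wrapping the exact logged one-liner (not minikube's code):

// Sketch wrapping the logged fallback one-liner verbatim.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	os.Stdout.Write(out) // container listing from whichever tool answered
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
}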
	I1210 06:40:25.260999  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:25.271808  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:25.271880  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:25.301435  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:25.301456  213962 cri.go:89] found id: ""
	I1210 06:40:25.301464  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:25.301539  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:25.306594  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:25.306690  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:25.353321  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:25.353341  213962 cri.go:89] found id: ""
	I1210 06:40:25.353349  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:25.353404  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:25.356895  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:25.356980  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:25.385131  213962 cri.go:89] found id: ""
	I1210 06:40:25.385152  213962 logs.go:282] 0 containers: []
	W1210 06:40:25.385161  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:25.385167  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:25.385222  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:25.411341  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:25.411362  213962 cri.go:89] found id: ""
	I1210 06:40:25.411371  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:25.411425  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:25.415129  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:25.415198  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:25.451770  213962 cri.go:89] found id: ""
	I1210 06:40:25.451795  213962 logs.go:282] 0 containers: []
	W1210 06:40:25.451804  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:25.451811  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:25.451871  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:25.483750  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:25.483778  213962 cri.go:89] found id: ""
	I1210 06:40:25.483786  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:25.483836  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:25.488132  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:25.488204  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:25.516870  213962 cri.go:89] found id: ""
	I1210 06:40:25.516893  213962 logs.go:282] 0 containers: []
	W1210 06:40:25.516901  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:25.516907  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:25.516963  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:25.544728  213962 cri.go:89] found id: ""
	I1210 06:40:25.544756  213962 logs.go:282] 0 containers: []
	W1210 06:40:25.544764  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:25.544778  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:25.544794  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:25.577915  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:25.577946  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:25.612942  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:25.612984  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:25.680415  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:25.680448  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:25.824105  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:25.824127  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:25.824140  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:25.852076  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:25.852161  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:25.893375  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:25.893406  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:25.959707  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:25.959740  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:25.976599  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:25.976626  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:28.519148  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:28.535139  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:28.535207  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:28.565752  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:28.565774  213962 cri.go:89] found id: ""
	I1210 06:40:28.565783  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:28.565848  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:28.571156  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:28.571227  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:28.611526  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:28.611609  213962 cri.go:89] found id: ""
	I1210 06:40:28.611632  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:28.611711  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:28.616792  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:28.616865  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:28.680279  213962 cri.go:89] found id: ""
	I1210 06:40:28.680301  213962 logs.go:282] 0 containers: []
	W1210 06:40:28.680315  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:28.680322  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:28.680388  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:28.737362  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:28.737383  213962 cri.go:89] found id: ""
	I1210 06:40:28.737391  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:28.737447  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:28.749486  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:28.749557  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:28.808157  213962 cri.go:89] found id: ""
	I1210 06:40:28.808192  213962 logs.go:282] 0 containers: []
	W1210 06:40:28.808202  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:28.808208  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:28.808268  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:28.844624  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:28.844644  213962 cri.go:89] found id: ""
	I1210 06:40:28.844652  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:28.844706  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:28.855410  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:28.855481  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:28.900501  213962 cri.go:89] found id: ""
	I1210 06:40:28.900523  213962 logs.go:282] 0 containers: []
	W1210 06:40:28.900531  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:28.900538  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:28.900595  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:28.972642  213962 cri.go:89] found id: ""
	I1210 06:40:28.972667  213962 logs.go:282] 0 containers: []
	W1210 06:40:28.972675  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:28.972688  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:28.972699  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:28.995215  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:28.995240  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:29.149149  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:29.149169  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:29.149181  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:29.274123  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:29.274160  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:29.315164  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:29.315195  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:29.357105  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:29.357135  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:29.404352  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:29.404376  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:29.483719  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:29.483794  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:29.524329  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:29.524359  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:32.064195  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:32.075846  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:32.075916  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:32.113170  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:32.113195  213962 cri.go:89] found id: ""
	I1210 06:40:32.113203  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:32.113261  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:32.117267  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:32.117349  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:32.151756  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:32.151788  213962 cri.go:89] found id: ""
	I1210 06:40:32.151798  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:32.151863  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:32.155912  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:32.156003  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:32.181059  213962 cri.go:89] found id: ""
	I1210 06:40:32.181092  213962 logs.go:282] 0 containers: []
	W1210 06:40:32.181100  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:32.181107  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:32.181181  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:32.213126  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:32.213155  213962 cri.go:89] found id: ""
	I1210 06:40:32.213164  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:32.213217  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:32.217269  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:32.217365  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:32.243740  213962 cri.go:89] found id: ""
	I1210 06:40:32.243783  213962 logs.go:282] 0 containers: []
	W1210 06:40:32.243792  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:32.243799  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:32.243871  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:32.270513  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:32.270538  213962 cri.go:89] found id: ""
	I1210 06:40:32.270546  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:32.270611  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:32.274782  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:32.274874  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:32.305243  213962 cri.go:89] found id: ""
	I1210 06:40:32.305279  213962 logs.go:282] 0 containers: []
	W1210 06:40:32.305287  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:32.305294  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:32.305361  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:32.340304  213962 cri.go:89] found id: ""
	I1210 06:40:32.340331  213962 logs.go:282] 0 containers: []
	W1210 06:40:32.340340  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:32.340356  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:32.340378  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:32.353965  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:32.353994  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:32.444042  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:32.444066  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:32.444077  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:32.477838  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:32.477867  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:32.513369  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:32.513399  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:32.575639  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:32.575673  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:32.612695  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:32.612727  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:32.656783  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:32.656812  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:32.696303  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:32.696334  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:35.280648  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:35.291458  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:35.291529  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:35.323256  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:35.323277  213962 cri.go:89] found id: ""
	I1210 06:40:35.323285  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:35.323337  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:35.326963  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:35.327052  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:35.355055  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:35.355073  213962 cri.go:89] found id: ""
	I1210 06:40:35.355081  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:35.355157  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:35.358650  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:35.358719  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:35.391572  213962 cri.go:89] found id: ""
	I1210 06:40:35.391635  213962 logs.go:282] 0 containers: []
	W1210 06:40:35.391651  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:35.391659  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:35.391720  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:35.418271  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:35.418294  213962 cri.go:89] found id: ""
	I1210 06:40:35.418303  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:35.418357  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:35.421982  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:35.422053  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:35.469723  213962 cri.go:89] found id: ""
	I1210 06:40:35.469746  213962 logs.go:282] 0 containers: []
	W1210 06:40:35.469755  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:35.469761  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:35.469818  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:35.496316  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:35.496336  213962 cri.go:89] found id: ""
	I1210 06:40:35.496345  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:35.496400  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:35.500511  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:35.500582  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:35.527431  213962 cri.go:89] found id: ""
	I1210 06:40:35.527457  213962 logs.go:282] 0 containers: []
	W1210 06:40:35.527466  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:35.527472  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:35.527529  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:35.553192  213962 cri.go:89] found id: ""
	I1210 06:40:35.553216  213962 logs.go:282] 0 containers: []
	W1210 06:40:35.553227  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:35.553242  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:35.553253  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:35.637632  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:35.637653  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:35.637667  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:35.705717  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:35.705750  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:35.755829  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:35.755858  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:35.809975  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:35.810010  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:35.837938  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:35.837966  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:35.900762  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:35.900797  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:35.914803  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:35.914829  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:35.964977  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:35.965012  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:38.499135  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:38.510063  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:38.510134  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:38.543519  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:38.543543  213962 cri.go:89] found id: ""
	I1210 06:40:38.543552  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:38.543612  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:38.547976  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:38.548043  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:38.579022  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:38.579044  213962 cri.go:89] found id: ""
	I1210 06:40:38.579052  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:38.579098  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:38.583322  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:38.583396  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:38.624501  213962 cri.go:89] found id: ""
	I1210 06:40:38.624527  213962 logs.go:282] 0 containers: []
	W1210 06:40:38.624536  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:38.624542  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:38.624597  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:38.669075  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:38.669095  213962 cri.go:89] found id: ""
	I1210 06:40:38.669103  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:38.669155  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:38.673109  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:38.673181  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:38.754147  213962 cri.go:89] found id: ""
	I1210 06:40:38.754172  213962 logs.go:282] 0 containers: []
	W1210 06:40:38.754181  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:38.754188  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:38.754254  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:38.825410  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:38.825433  213962 cri.go:89] found id: ""
	I1210 06:40:38.825442  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:38.825497  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:38.829421  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:38.829490  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:38.883793  213962 cri.go:89] found id: ""
	I1210 06:40:38.883816  213962 logs.go:282] 0 containers: []
	W1210 06:40:38.883831  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:38.883839  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:38.883898  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:38.940821  213962 cri.go:89] found id: ""
	I1210 06:40:38.940843  213962 logs.go:282] 0 containers: []
	W1210 06:40:38.940851  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:38.940867  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:38.940882  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:38.954805  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:38.954829  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:39.012678  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:39.012749  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:39.049789  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:39.049870  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:39.085755  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:39.085793  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:39.120833  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:39.120866  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:39.159462  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:39.159485  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:39.257159  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:39.257201  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:39.334809  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:39.334858  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:39.334874  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:41.882760  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:41.894117  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:41.894191  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:41.924247  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:41.924271  213962 cri.go:89] found id: ""
	I1210 06:40:41.924279  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:41.924334  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:41.932109  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:41.932182  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:41.983576  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:41.983599  213962 cri.go:89] found id: ""
	I1210 06:40:41.983608  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:41.983660  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:41.987052  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:41.987122  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:42.020548  213962 cri.go:89] found id: ""
	I1210 06:40:42.020577  213962 logs.go:282] 0 containers: []
	W1210 06:40:42.020587  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:42.020594  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:42.020660  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:42.049737  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:42.049764  213962 cri.go:89] found id: ""
	I1210 06:40:42.049774  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:42.049831  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:42.056629  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:42.056703  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:42.096087  213962 cri.go:89] found id: ""
	I1210 06:40:42.096170  213962 logs.go:282] 0 containers: []
	W1210 06:40:42.096195  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:42.096214  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:42.096309  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:42.133344  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:42.133367  213962 cri.go:89] found id: ""
	I1210 06:40:42.133376  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:42.133437  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:42.138229  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:42.138303  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:42.177399  213962 cri.go:89] found id: ""
	I1210 06:40:42.177428  213962 logs.go:282] 0 containers: []
	W1210 06:40:42.177438  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:42.177445  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:42.177517  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:42.211422  213962 cri.go:89] found id: ""
	I1210 06:40:42.211450  213962 logs.go:282] 0 containers: []
	W1210 06:40:42.211459  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:42.211476  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:42.211489  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:42.278580  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:42.278620  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:42.296126  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:42.296157  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:42.359833  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:42.359868  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:42.400503  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:42.400539  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:42.439285  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:42.439321  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:42.540106  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:42.540130  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:42.540143  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:42.575854  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:42.575879  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:42.622542  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:42.622573  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:45.168326  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:45.196064  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:45.196163  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:45.243680  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:45.243740  213962 cri.go:89] found id: ""
	I1210 06:40:45.243759  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:45.243892  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:45.249616  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:45.249697  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:45.283332  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:45.283357  213962 cri.go:89] found id: ""
	I1210 06:40:45.283366  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:45.283423  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:45.287590  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:45.287664  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:45.322034  213962 cri.go:89] found id: ""
	I1210 06:40:45.322061  213962 logs.go:282] 0 containers: []
	W1210 06:40:45.322070  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:45.322076  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:45.322133  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:45.349216  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:45.349240  213962 cri.go:89] found id: ""
	I1210 06:40:45.349249  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:45.349304  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:45.353429  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:45.353502  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:45.381960  213962 cri.go:89] found id: ""
	I1210 06:40:45.381987  213962 logs.go:282] 0 containers: []
	W1210 06:40:45.381996  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:45.382003  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:45.382061  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:45.419348  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:45.419380  213962 cri.go:89] found id: ""
	I1210 06:40:45.419388  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:45.419457  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:45.423549  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:45.423624  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:45.450931  213962 cri.go:89] found id: ""
	I1210 06:40:45.450959  213962 logs.go:282] 0 containers: []
	W1210 06:40:45.450968  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:45.450975  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:45.451071  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:45.478165  213962 cri.go:89] found id: ""
	I1210 06:40:45.478192  213962 logs.go:282] 0 containers: []
	W1210 06:40:45.478201  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:45.478215  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:45.478235  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:45.542188  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:45.542226  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:45.554733  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:45.554762  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:45.641856  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:45.641875  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:45.641893  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:45.688457  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:45.688529  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:45.745909  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:45.745995  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:45.826302  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:45.826377  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:45.866448  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:45.866545  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:45.896012  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:45.896188  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.449488  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:48.460166  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:48.460236  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:48.499568  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:48.499592  213962 cri.go:89] found id: ""
	I1210 06:40:48.499600  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:48.499653  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:48.503775  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:48.503852  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:48.546658  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:48.546684  213962 cri.go:89] found id: ""
	I1210 06:40:48.546693  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:48.546781  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:48.550468  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:48.550544  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:48.589343  213962 cri.go:89] found id: ""
	I1210 06:40:48.589369  213962 logs.go:282] 0 containers: []
	W1210 06:40:48.589379  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:48.589386  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:48.589442  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:48.641277  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:48.641301  213962 cri.go:89] found id: ""
	I1210 06:40:48.641311  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:48.641366  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:48.645441  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:48.645516  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:48.693183  213962 cri.go:89] found id: ""
	I1210 06:40:48.693210  213962 logs.go:282] 0 containers: []
	W1210 06:40:48.693218  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:48.693224  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:48.693283  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:48.744730  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:48.744749  213962 cri.go:89] found id: ""
	I1210 06:40:48.744758  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:48.744816  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:48.753358  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:48.753462  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:48.781727  213962 cri.go:89] found id: ""
	I1210 06:40:48.781754  213962 logs.go:282] 0 containers: []
	W1210 06:40:48.781764  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:48.781770  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:48.781831  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:48.829732  213962 cri.go:89] found id: ""
	I1210 06:40:48.829758  213962 logs.go:282] 0 containers: []
	W1210 06:40:48.829767  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:48.829783  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:48.829794  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:48.896302  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:48.896372  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:48.993153  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:48.993232  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:49.024982  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:49.025062  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:49.088618  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:49.088695  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:49.160468  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:49.160502  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:49.209534  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:49.209561  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:49.265177  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:49.265225  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:49.345597  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:49.345635  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:49.486388  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
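What the failure above shows: a kube-apiserver container exists (crictl returns an ID for it), but nothing is accepting connections on the secure port, so every kubectl-based probe fails with "connection refused" on localhost:8443. A minimal manual check along the same lines, as a sketch (the pgrep pattern is the one the harness itself runs; the curl probe against /readyz is an added illustration, not something this log executes):

	# is an apiserver process for this profile running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	# if so, does the secure port answer? (-k because the serving cert is self-signed)
	curl -sk https://localhost:8443/readyz || echo "apiserver not ready on :8443"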
	I1210 06:40:51.987120  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:52.003636  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:52.003711  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:52.091758  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:52.091782  213962 cri.go:89] found id: ""
	I1210 06:40:52.091790  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:52.091948  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:52.096704  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:52.096806  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:52.156997  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:52.157026  213962 cri.go:89] found id: ""
	I1210 06:40:52.157039  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:52.157108  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:52.175559  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:52.175643  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:52.223870  213962 cri.go:89] found id: ""
	I1210 06:40:52.223900  213962 logs.go:282] 0 containers: []
	W1210 06:40:52.223910  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:52.223916  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:52.223979  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:52.272149  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:52.272172  213962 cri.go:89] found id: ""
	I1210 06:40:52.272181  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:52.272255  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:52.279732  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:52.279836  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:52.327335  213962 cri.go:89] found id: ""
	I1210 06:40:52.327364  213962 logs.go:282] 0 containers: []
	W1210 06:40:52.327373  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:52.327379  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:52.327440  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:52.381342  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:52.381365  213962 cri.go:89] found id: ""
	I1210 06:40:52.381374  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:52.381437  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:52.391408  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:52.391499  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:52.428159  213962 cri.go:89] found id: ""
	I1210 06:40:52.428193  213962 logs.go:282] 0 containers: []
	W1210 06:40:52.428202  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:52.428208  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:52.428266  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:52.481089  213962 cri.go:89] found id: ""
	I1210 06:40:52.481114  213962 logs.go:282] 0 containers: []
	W1210 06:40:52.481123  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:52.481140  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:52.481151  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:52.595919  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:52.595941  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:52.595955  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:52.660214  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:52.660244  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:52.737605  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:52.737634  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:52.758534  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:52.758560  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:52.813118  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:52.813151  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:52.872118  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:52.872144  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:52.914611  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:52.914695  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:52.971530  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:52.971604  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
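Each retry cycle repeats the same per-component lookup: every expected control-plane and addon container is queried by name through crictl, and an empty result is logged as 'No container was found matching ...'. A condensed sketch of the equivalent loop, assuming crictl and the containerd runtime used in these runs:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -n "$ids" ]; then echo "$name: $ids"; else echo "no container matching \"$name\""; fi
	done

In every cycle only kube-apiserver, etcd, kube-scheduler and kube-controller-manager return IDs; coredns, kube-proxy, kindnet and storage-provisioner never appear, consistent with an apiserver that exists as a container but is not serving the API.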
	I1210 06:40:55.551378  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:55.561559  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:55.561637  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:55.594664  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:55.594684  213962 cri.go:89] found id: ""
	I1210 06:40:55.594693  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:55.594748  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:55.600914  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:55.600980  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:55.628497  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:55.628519  213962 cri.go:89] found id: ""
	I1210 06:40:55.628527  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:55.628583  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:55.632290  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:55.632360  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:55.658140  213962 cri.go:89] found id: ""
	I1210 06:40:55.658163  213962 logs.go:282] 0 containers: []
	W1210 06:40:55.658171  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:55.658178  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:55.658235  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:55.700778  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:55.700848  213962 cri.go:89] found id: ""
	I1210 06:40:55.700860  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:55.700948  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:55.705802  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:55.705879  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:55.750580  213962 cri.go:89] found id: ""
	I1210 06:40:55.750601  213962 logs.go:282] 0 containers: []
	W1210 06:40:55.750609  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:55.750615  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:55.750676  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:55.783142  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:55.783160  213962 cri.go:89] found id: ""
	I1210 06:40:55.783169  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:55.783225  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:55.787334  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:55.787403  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:55.825646  213962 cri.go:89] found id: ""
	I1210 06:40:55.825667  213962 logs.go:282] 0 containers: []
	W1210 06:40:55.825675  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:55.825681  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:55.825746  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:55.868072  213962 cri.go:89] found id: ""
	I1210 06:40:55.868149  213962 logs.go:282] 0 containers: []
	W1210 06:40:55.868171  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:55.868196  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:55.868239  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:55.883514  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:55.883595  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:55.927321  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:55.927355  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:55.994205  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:55.994293  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:56.067517  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:56.067539  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:56.067553  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:56.105492  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:56.105523  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:56.193727  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:56.193758  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:56.285186  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:56.285215  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:56.321984  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:56.322016  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:40:58.864703  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:40:58.878446  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:40:58.878518  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:40:58.904818  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:58.904841  213962 cri.go:89] found id: ""
	I1210 06:40:58.904849  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:40:58.904906  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:58.908633  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:40:58.908703  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:40:58.935435  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:58.935459  213962 cri.go:89] found id: ""
	I1210 06:40:58.935469  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:40:58.935525  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:58.939274  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:40:58.939359  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:40:58.965346  213962 cri.go:89] found id: ""
	I1210 06:40:58.965370  213962 logs.go:282] 0 containers: []
	W1210 06:40:58.965377  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:40:58.965384  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:40:58.965442  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:40:58.992788  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:58.992808  213962 cri.go:89] found id: ""
	I1210 06:40:58.992816  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:40:58.992870  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:58.996475  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:40:58.996592  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:40:59.024816  213962 cri.go:89] found id: ""
	I1210 06:40:59.024840  213962 logs.go:282] 0 containers: []
	W1210 06:40:59.024850  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:40:59.024856  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:40:59.024915  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:40:59.054648  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:59.054676  213962 cri.go:89] found id: ""
	I1210 06:40:59.054684  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:40:59.054742  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:40:59.058599  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:40:59.058687  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:40:59.084652  213962 cri.go:89] found id: ""
	I1210 06:40:59.084676  213962 logs.go:282] 0 containers: []
	W1210 06:40:59.084685  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:40:59.084691  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:40:59.084750  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:40:59.119962  213962 cri.go:89] found id: ""
	I1210 06:40:59.120002  213962 logs.go:282] 0 containers: []
	W1210 06:40:59.120011  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:40:59.120040  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:40:59.120058  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:40:59.269358  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:40:59.269378  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:40:59.269389  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:40:59.306648  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:40:59.306678  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:40:59.369877  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:40:59.369910  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:40:59.384805  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:40:59.384835  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:40:59.420323  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:40:59.420358  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:40:59.453925  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:40:59.453964  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:40:59.483915  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:40:59.483944  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:40:59.512818  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:40:59.512849  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
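Between lookups the harness gathers a fixed set of logs, always with a 400-line tail. The commands appear verbatim in the Run: lines above; collected in one place for reference (the <container-id> placeholder stands for each ID found in that cycle):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a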
	I1210 06:41:02.043177  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:02.055545  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:02.055622  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:02.105580  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:02.105602  213962 cri.go:89] found id: ""
	I1210 06:41:02.105610  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:02.105664  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:02.109793  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:02.109867  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:02.152743  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:02.152768  213962 cri.go:89] found id: ""
	I1210 06:41:02.152776  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:02.152844  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:02.161725  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:02.161836  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:02.252888  213962 cri.go:89] found id: ""
	I1210 06:41:02.252917  213962 logs.go:282] 0 containers: []
	W1210 06:41:02.252927  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:02.252933  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:02.253017  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:02.290239  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:02.290273  213962 cri.go:89] found id: ""
	I1210 06:41:02.290285  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:02.290368  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:02.294213  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:02.294324  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:02.358932  213962 cri.go:89] found id: ""
	I1210 06:41:02.358960  213962 logs.go:282] 0 containers: []
	W1210 06:41:02.358969  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:02.358975  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:02.359078  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:02.399059  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:02.399083  213962 cri.go:89] found id: ""
	I1210 06:41:02.399091  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:02.399167  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:02.403428  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:02.403525  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:02.445202  213962 cri.go:89] found id: ""
	I1210 06:41:02.445228  213962 logs.go:282] 0 containers: []
	W1210 06:41:02.445237  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:02.445243  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:02.445352  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:02.485933  213962 cri.go:89] found id: ""
	I1210 06:41:02.486015  213962 logs.go:282] 0 containers: []
	W1210 06:41:02.486050  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:02.486092  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:02.486123  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:02.561838  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:02.561921  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:02.577475  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:02.577554  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:02.619450  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:02.619531  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:02.661814  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:02.661953  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:02.721325  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:02.721351  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:02.847200  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:02.847219  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:02.847232  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:02.901291  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:02.901365  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:02.982693  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:02.982767  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:05.551373  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:05.561775  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:05.561889  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:05.599995  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:05.600017  213962 cri.go:89] found id: ""
	I1210 06:41:05.600025  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:05.600084  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:05.603868  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:05.603959  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:05.632022  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:05.632045  213962 cri.go:89] found id: ""
	I1210 06:41:05.632054  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:05.632110  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:05.636034  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:05.636108  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:05.660367  213962 cri.go:89] found id: ""
	I1210 06:41:05.660391  213962 logs.go:282] 0 containers: []
	W1210 06:41:05.660400  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:05.660406  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:05.660463  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:05.686891  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:05.686916  213962 cri.go:89] found id: ""
	I1210 06:41:05.686924  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:05.686980  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:05.690679  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:05.690792  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:05.715202  213962 cri.go:89] found id: ""
	I1210 06:41:05.715224  213962 logs.go:282] 0 containers: []
	W1210 06:41:05.715233  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:05.715239  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:05.715324  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:05.740661  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:05.740735  213962 cri.go:89] found id: ""
	I1210 06:41:05.740752  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:05.740822  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:05.744660  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:05.744766  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:05.776949  213962 cri.go:89] found id: ""
	I1210 06:41:05.776971  213962 logs.go:282] 0 containers: []
	W1210 06:41:05.776980  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:05.777021  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:05.777096  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:05.814102  213962 cri.go:89] found id: ""
	I1210 06:41:05.814167  213962 logs.go:282] 0 containers: []
	W1210 06:41:05.814188  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:05.814209  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:05.814221  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:05.849253  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:05.849283  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:05.880813  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:05.880844  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:05.928738  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:05.928823  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:05.998544  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:05.998594  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:06.029105  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:06.029136  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:06.062187  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:06.062222  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:06.076452  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:06.076482  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:06.145647  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:06.145670  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:06.145683  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:08.685439  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:08.696325  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:08.696401  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:08.723528  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:08.723550  213962 cri.go:89] found id: ""
	I1210 06:41:08.723560  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:08.723618  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:08.727620  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:08.727692  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:08.759265  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:08.759289  213962 cri.go:89] found id: ""
	I1210 06:41:08.759298  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:08.759359  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:08.763243  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:08.763315  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:08.790161  213962 cri.go:89] found id: ""
	I1210 06:41:08.790188  213962 logs.go:282] 0 containers: []
	W1210 06:41:08.790197  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:08.790204  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:08.790282  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:08.818173  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:08.818196  213962 cri.go:89] found id: ""
	I1210 06:41:08.818205  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:08.818277  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:08.822011  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:08.822083  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:08.851689  213962 cri.go:89] found id: ""
	I1210 06:41:08.851715  213962 logs.go:282] 0 containers: []
	W1210 06:41:08.851736  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:08.851759  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:08.851834  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:08.877220  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:08.877241  213962 cri.go:89] found id: ""
	I1210 06:41:08.877249  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:08.877327  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:08.881252  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:08.881344  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:08.906427  213962 cri.go:89] found id: ""
	I1210 06:41:08.906452  213962 logs.go:282] 0 containers: []
	W1210 06:41:08.906461  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:08.906468  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:08.906531  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:08.939340  213962 cri.go:89] found id: ""
	I1210 06:41:08.939367  213962 logs.go:282] 0 containers: []
	W1210 06:41:08.939376  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:08.939396  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:08.939408  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:09.015225  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:09.015247  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:09.015259  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:09.043999  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:09.044030  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:09.089311  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:09.089340  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:09.125551  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:09.125581  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:09.161436  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:09.161468  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:09.194435  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:09.194463  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:09.226797  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:09.226828  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:09.288580  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:09.288616  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:11.802798  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:11.813427  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:11.813540  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:11.842903  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:11.842966  213962 cri.go:89] found id: ""
	I1210 06:41:11.842989  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:11.843083  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:11.846822  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:11.846952  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:11.872280  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:11.872342  213962 cri.go:89] found id: ""
	I1210 06:41:11.872365  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:11.872452  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:11.876163  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:11.876246  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:11.901329  213962 cri.go:89] found id: ""
	I1210 06:41:11.901354  213962 logs.go:282] 0 containers: []
	W1210 06:41:11.901363  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:11.901368  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:11.901430  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:11.935321  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:11.935345  213962 cri.go:89] found id: ""
	I1210 06:41:11.935354  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:11.935408  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:11.939249  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:11.939322  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:11.964462  213962 cri.go:89] found id: ""
	I1210 06:41:11.964487  213962 logs.go:282] 0 containers: []
	W1210 06:41:11.964496  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:11.964502  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:11.964561  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:11.991505  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:11.991527  213962 cri.go:89] found id: ""
	I1210 06:41:11.991536  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:11.991602  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:11.995292  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:11.995404  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:12.026697  213962 cri.go:89] found id: ""
	I1210 06:41:12.026730  213962 logs.go:282] 0 containers: []
	W1210 06:41:12.026739  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:12.026761  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:12.026849  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:12.056531  213962 cri.go:89] found id: ""
	I1210 06:41:12.056553  213962 logs.go:282] 0 containers: []
	W1210 06:41:12.056561  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:12.056575  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:12.056587  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:12.083748  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:12.083779  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:12.111882  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:12.111915  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:12.150998  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:12.151048  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:12.164182  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:12.164213  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:12.197223  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:12.197262  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:12.229241  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:12.229276  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:12.291683  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:12.291723  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:12.363697  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:12.363718  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:12.363730  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:14.915152  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:14.935413  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:14.935486  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:14.969379  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:14.969398  213962 cri.go:89] found id: ""
	I1210 06:41:14.969406  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:14.969460  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:14.974579  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:14.974655  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:15.017388  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:15.017476  213962 cri.go:89] found id: ""
	I1210 06:41:15.017500  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:15.017601  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:15.022708  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:15.022850  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:15.053180  213962 cri.go:89] found id: ""
	I1210 06:41:15.053209  213962 logs.go:282] 0 containers: []
	W1210 06:41:15.053218  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:15.053228  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:15.053295  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:15.079599  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:15.079667  213962 cri.go:89] found id: ""
	I1210 06:41:15.079688  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:15.079764  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:15.083695  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:15.083825  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:15.110289  213962 cri.go:89] found id: ""
	I1210 06:41:15.110316  213962 logs.go:282] 0 containers: []
	W1210 06:41:15.110325  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:15.110331  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:15.110390  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:15.145238  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:15.145263  213962 cri.go:89] found id: ""
	I1210 06:41:15.145272  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:15.145325  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:15.149384  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:15.149456  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:15.174717  213962 cri.go:89] found id: ""
	I1210 06:41:15.174742  213962 logs.go:282] 0 containers: []
	W1210 06:41:15.174751  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:15.174758  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:15.174824  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:15.201047  213962 cri.go:89] found id: ""
	I1210 06:41:15.201084  213962 logs.go:282] 0 containers: []
	W1210 06:41:15.201093  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:15.201124  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:15.201148  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:15.267488  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:15.267527  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:15.281307  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:15.281341  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:15.340538  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:15.340577  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:15.387684  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:15.387714  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:15.428100  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:15.428132  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:15.511004  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
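The "connection refused" on localhost:8443 above means that although a kube-apiserver container exists (id 7315a147...), the apiserver is not yet accepting connections on the node, so every kubectl call made through the node-local kubeconfig fails the same way. A minimal way to probe the same condition by hand, assuming shell access to the minikube node (the port comes from the refusals above; /healthz is the apiserver's standard health endpoint):

    # Probe the condition behind the kubectl failures logged above
    # (assumption: run inside the minikube node; -k skips cert verification).
    curl -sk https://localhost:8443/healthz || echo "apiserver not serving on 8443 yet"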
	I1210 06:41:15.511056  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:15.511079  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:15.557508  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:15.557545  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:15.595702  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:15.595740  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
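Each poll cycle above begins by discovering which control-plane containers exist: for every expected component, minikube runs crictl ps -a --quiet --name=<component> over SSH and records the returned ids, and empty output produces the 'No container was found matching' warnings. A condensed sketch of that discovery step, assuming it runs inside the node where crictl can reach containerd's CRI socket:

    # Per-component container discovery, mirroring the crictl calls logged above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "No container was found matching \"${name}\""
      else
        echo "${name}: ${ids}"
      fi
    done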
	I1210 06:41:18.148730  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:18.160401  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:18.160472  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:18.193568  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:18.193592  213962 cri.go:89] found id: ""
	I1210 06:41:18.193601  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:18.193656  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:18.197614  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:18.197684  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:18.223767  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:18.223789  213962 cri.go:89] found id: ""
	I1210 06:41:18.223798  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:18.223854  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:18.229498  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:18.229573  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:18.257561  213962 cri.go:89] found id: ""
	I1210 06:41:18.257590  213962 logs.go:282] 0 containers: []
	W1210 06:41:18.257598  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:18.257605  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:18.257665  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:18.283425  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:18.283445  213962 cri.go:89] found id: ""
	I1210 06:41:18.283453  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:18.283508  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:18.287130  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:18.287199  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:18.320025  213962 cri.go:89] found id: ""
	I1210 06:41:18.320049  213962 logs.go:282] 0 containers: []
	W1210 06:41:18.320059  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:18.320065  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:18.320126  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:18.346761  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:18.346781  213962 cri.go:89] found id: ""
	I1210 06:41:18.346790  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:18.346849  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:18.350512  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:18.350591  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:18.378048  213962 cri.go:89] found id: ""
	I1210 06:41:18.378070  213962 logs.go:282] 0 containers: []
	W1210 06:41:18.378078  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:18.378084  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:18.378141  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:18.407841  213962 cri.go:89] found id: ""
	I1210 06:41:18.407866  213962 logs.go:282] 0 containers: []
	W1210 06:41:18.407874  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:18.407888  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:18.407899  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:18.465518  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:18.465551  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:18.535000  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:18.535095  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:18.535140  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:18.569317  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:18.569352  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:18.606856  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:18.606889  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:18.710619  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:18.710650  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:18.766290  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:18.766332  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:18.839963  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:18.839995  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:18.855169  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:18.855200  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:21.400600  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:21.411919  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:21.411987  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:21.446979  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:21.446997  213962 cri.go:89] found id: ""
	I1210 06:41:21.447005  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:21.447100  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:21.451646  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:21.451716  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:21.484253  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:21.484271  213962 cri.go:89] found id: ""
	I1210 06:41:21.484280  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:21.484333  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:21.488631  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:21.488750  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:21.527541  213962 cri.go:89] found id: ""
	I1210 06:41:21.527562  213962 logs.go:282] 0 containers: []
	W1210 06:41:21.527571  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:21.527579  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:21.527636  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:21.560928  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:21.560947  213962 cri.go:89] found id: ""
	I1210 06:41:21.560955  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:21.561008  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:21.568147  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:21.568216  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:21.605375  213962 cri.go:89] found id: ""
	I1210 06:41:21.605395  213962 logs.go:282] 0 containers: []
	W1210 06:41:21.605404  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:21.605410  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:21.605466  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:21.639811  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:21.639830  213962 cri.go:89] found id: ""
	I1210 06:41:21.639839  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:21.639896  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:21.644803  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:21.644873  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:21.682040  213962 cri.go:89] found id: ""
	I1210 06:41:21.682061  213962 logs.go:282] 0 containers: []
	W1210 06:41:21.682070  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:21.682076  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:21.682148  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:21.717591  213962 cri.go:89] found id: ""
	I1210 06:41:21.717612  213962 logs.go:282] 0 containers: []
	W1210 06:41:21.717621  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:21.717634  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:21.717646  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:21.734792  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:21.734818  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:21.782785  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:21.782820  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:21.814260  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:21.814299  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:21.853435  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:21.853463  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:21.893376  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:21.893413  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:21.992324  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:21.992412  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:22.070265  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:22.070293  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:22.070310  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:22.121774  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:22.121852  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:24.661336  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:24.672122  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:24.672191  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:24.696931  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:24.696952  213962 cri.go:89] found id: ""
	I1210 06:41:24.696960  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:24.697016  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:24.700847  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:24.700915  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:24.725861  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:24.725890  213962 cri.go:89] found id: ""
	I1210 06:41:24.725898  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:24.725961  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:24.729688  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:24.729760  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:24.759916  213962 cri.go:89] found id: ""
	I1210 06:41:24.759936  213962 logs.go:282] 0 containers: []
	W1210 06:41:24.759945  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:24.759951  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:24.760007  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:24.787691  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:24.787713  213962 cri.go:89] found id: ""
	I1210 06:41:24.787721  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:24.787806  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:24.791455  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:24.791521  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:24.817275  213962 cri.go:89] found id: ""
	I1210 06:41:24.817298  213962 logs.go:282] 0 containers: []
	W1210 06:41:24.817306  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:24.817312  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:24.817375  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:24.846710  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:24.846776  213962 cri.go:89] found id: ""
	I1210 06:41:24.846800  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:24.846870  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:24.850582  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:24.850712  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:24.875183  213962 cri.go:89] found id: ""
	I1210 06:41:24.875207  213962 logs.go:282] 0 containers: []
	W1210 06:41:24.875215  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:24.875221  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:24.875288  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:24.901339  213962 cri.go:89] found id: ""
	I1210 06:41:24.901420  213962 logs.go:282] 0 containers: []
	W1210 06:41:24.901450  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:24.901480  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:24.901504  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:24.967902  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:24.967984  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:25.044866  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:25.044885  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:25.044897  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:25.080953  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:25.080991  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:25.115003  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:25.115206  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:25.147587  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:25.147617  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:25.182742  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:25.182897  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:25.218642  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:25.218744  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:25.234333  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:25.234360  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
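The cycle boundaries are visible in the timestamps: each round opens with sudo pgrep -xnf kube-apiserver.*minikube.* and the rounds repeat roughly every three seconds. A hypothetical sketch of the retry loop this trace implies, with the interval inferred from the log times rather than taken from minikube's source (an assumption):

    # Hypothetical wait loop matching the cadence of the trace above.
    # Assumption: the ~3s interval is read off the timestamps, not from code.
    while ! curl -sk https://localhost:8443/healthz >/dev/null; do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # is the process up at all?
      # ...re-run the container discovery and log gathering shown above...
      sleep 3
    done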
	I1210 06:41:27.773748  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:27.786174  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:27.786237  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:27.817886  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:27.817905  213962 cri.go:89] found id: ""
	I1210 06:41:27.817914  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:27.817966  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:27.821988  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:27.822055  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:27.853007  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:27.853030  213962 cri.go:89] found id: ""
	I1210 06:41:27.853038  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:27.853090  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:27.857810  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:27.857895  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:27.905374  213962 cri.go:89] found id: ""
	I1210 06:41:27.905395  213962 logs.go:282] 0 containers: []
	W1210 06:41:27.905402  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:27.905412  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:27.905462  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:27.949429  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:27.949444  213962 cri.go:89] found id: ""
	I1210 06:41:27.949452  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:27.949494  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:27.954519  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:27.954581  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:28.022148  213962 cri.go:89] found id: ""
	I1210 06:41:28.022170  213962 logs.go:282] 0 containers: []
	W1210 06:41:28.022179  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:28.022185  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:28.022243  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:28.063342  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:28.063366  213962 cri.go:89] found id: ""
	I1210 06:41:28.063374  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:28.063437  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:28.066973  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:28.067073  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:28.091370  213962 cri.go:89] found id: ""
	I1210 06:41:28.091392  213962 logs.go:282] 0 containers: []
	W1210 06:41:28.091401  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:28.091407  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:28.091465  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:28.129702  213962 cri.go:89] found id: ""
	I1210 06:41:28.129723  213962 logs.go:282] 0 containers: []
	W1210 06:41:28.129731  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:28.129744  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:28.129755  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:28.202557  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:28.202611  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:28.238583  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:28.238640  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:28.293358  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:28.293525  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:28.333998  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:28.334042  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:28.382643  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:28.382671  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:28.398347  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:28.398424  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:28.498288  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:28.498359  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:28.498396  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:28.544848  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:28.545172  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:31.096615  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:31.107933  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:31.108005  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:31.137883  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:31.137904  213962 cri.go:89] found id: ""
	I1210 06:41:31.137912  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:31.137973  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:31.141856  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:31.141924  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:31.169344  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:31.169363  213962 cri.go:89] found id: ""
	I1210 06:41:31.169371  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:31.169422  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:31.173062  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:31.173132  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:31.207936  213962 cri.go:89] found id: ""
	I1210 06:41:31.207956  213962 logs.go:282] 0 containers: []
	W1210 06:41:31.207965  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:31.207971  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:31.208024  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:31.252535  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:31.252554  213962 cri.go:89] found id: ""
	I1210 06:41:31.252562  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:31.252615  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:31.256956  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:31.257026  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:31.294394  213962 cri.go:89] found id: ""
	I1210 06:41:31.294416  213962 logs.go:282] 0 containers: []
	W1210 06:41:31.294425  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:31.294439  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:31.294499  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:31.327244  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:31.327263  213962 cri.go:89] found id: ""
	I1210 06:41:31.327271  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:31.327323  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:31.332372  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:31.332445  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:31.369633  213962 cri.go:89] found id: ""
	I1210 06:41:31.369659  213962 logs.go:282] 0 containers: []
	W1210 06:41:31.369668  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:31.369676  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:31.369732  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:31.405879  213962 cri.go:89] found id: ""
	I1210 06:41:31.405906  213962 logs.go:282] 0 containers: []
	W1210 06:41:31.405915  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:31.405930  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:31.405941  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:31.476537  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:31.476590  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:31.489562  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:31.489588  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:31.527251  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:31.527281  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:31.571404  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:31.571436  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:31.648051  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:31.648074  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:31.648086  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:31.718955  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:31.718987  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:31.788721  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:31.788793  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:31.833046  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:31.833119  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:34.376704  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:34.388094  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:34.388167  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:34.414172  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:34.414191  213962 cri.go:89] found id: ""
	I1210 06:41:34.414199  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:34.414257  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:34.417838  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:34.417905  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:34.444372  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:34.444391  213962 cri.go:89] found id: ""
	I1210 06:41:34.444399  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:34.444455  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:34.448091  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:34.448163  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:34.476533  213962 cri.go:89] found id: ""
	I1210 06:41:34.476559  213962 logs.go:282] 0 containers: []
	W1210 06:41:34.476567  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:34.476573  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:34.476631  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:34.502633  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:34.502651  213962 cri.go:89] found id: ""
	I1210 06:41:34.502659  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:34.502714  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:34.506497  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:34.506566  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:34.530620  213962 cri.go:89] found id: ""
	I1210 06:41:34.530641  213962 logs.go:282] 0 containers: []
	W1210 06:41:34.530649  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:34.530655  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:34.530710  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:34.558819  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:34.558842  213962 cri.go:89] found id: ""
	I1210 06:41:34.558851  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:34.558909  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:34.562575  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:34.562668  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:34.603901  213962 cri.go:89] found id: ""
	I1210 06:41:34.603926  213962 logs.go:282] 0 containers: []
	W1210 06:41:34.603935  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:34.603941  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:34.604026  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:34.652726  213962 cri.go:89] found id: ""
	I1210 06:41:34.652750  213962 logs.go:282] 0 containers: []
	W1210 06:41:34.652758  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:34.652801  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:34.652819  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:34.759249  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:34.760208  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:34.888955  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:34.888977  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:34.888991  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:34.937289  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:34.937326  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:34.989760  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:34.989837  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:35.043838  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:35.043868  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:35.058483  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:35.058512  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:35.101689  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:35.101723  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:35.139191  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:35.139219  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:37.680176  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:37.690302  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:37.690372  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:37.721800  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:37.721822  213962 cri.go:89] found id: ""
	I1210 06:41:37.721830  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:37.721881  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:37.725725  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:37.725795  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:37.754442  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:37.754464  213962 cri.go:89] found id: ""
	I1210 06:41:37.754473  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:37.754523  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:37.758619  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:37.758687  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:37.795850  213962 cri.go:89] found id: ""
	I1210 06:41:37.795869  213962 logs.go:282] 0 containers: []
	W1210 06:41:37.795877  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:37.795883  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:37.795939  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:37.839118  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:37.839136  213962 cri.go:89] found id: ""
	I1210 06:41:37.839144  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:37.839203  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:37.842859  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:37.842926  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:37.868974  213962 cri.go:89] found id: ""
	I1210 06:41:37.868995  213962 logs.go:282] 0 containers: []
	W1210 06:41:37.869004  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:37.869010  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:37.869066  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:37.898763  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:37.898786  213962 cri.go:89] found id: ""
	I1210 06:41:37.898795  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:37.898847  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:37.902832  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:37.902908  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:37.949449  213962 cri.go:89] found id: ""
	I1210 06:41:37.949473  213962 logs.go:282] 0 containers: []
	W1210 06:41:37.949482  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:37.949488  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:37.949547  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:37.997876  213962 cri.go:89] found id: ""
	I1210 06:41:37.997905  213962 logs.go:282] 0 containers: []
	W1210 06:41:37.997914  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:37.997929  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:37.997941  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:38.080580  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:38.080658  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:38.095286  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:38.095366  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:38.189203  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:38.189221  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:38.189233  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:38.241179  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:38.241259  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:38.278986  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:38.279188  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:38.314095  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:38.314131  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:38.356746  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:38.356787  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:38.393504  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:38.393528  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
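The second half of every cycle collects the same fixed set of logs; only the order varies between rounds. Condensed from the Run: lines above (commands verbatim from the trace; the container id is a placeholder for whichever ids the discovery step returned):

    # Log collection per cycle, as recorded above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo /usr/local/bin/crictl logs --tail 400 "${CONTAINER_ID}"   # once per found id
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a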
	I1210 06:41:40.931158  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:40.943786  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:40.943855  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:41.035915  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:41.035934  213962 cri.go:89] found id: ""
	I1210 06:41:41.035943  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:41.035998  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:41.042347  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:41.042419  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:41.085134  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:41.085153  213962 cri.go:89] found id: ""
	I1210 06:41:41.085161  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:41.085212  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:41.088806  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:41.088871  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:41.132278  213962 cri.go:89] found id: ""
	I1210 06:41:41.132302  213962 logs.go:282] 0 containers: []
	W1210 06:41:41.132311  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:41.132317  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:41.132372  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:41.182992  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:41.183025  213962 cri.go:89] found id: ""
	I1210 06:41:41.183034  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:41.183089  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:41.188763  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:41.188832  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:41.233837  213962 cri.go:89] found id: ""
	I1210 06:41:41.233863  213962 logs.go:282] 0 containers: []
	W1210 06:41:41.233872  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:41.233878  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:41.233941  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:41.265303  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:41.265329  213962 cri.go:89] found id: ""
	I1210 06:41:41.265338  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:41.265432  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:41.269262  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:41.269333  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:41.312972  213962 cri.go:89] found id: ""
	I1210 06:41:41.312997  213962 logs.go:282] 0 containers: []
	W1210 06:41:41.313006  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:41.313012  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:41.313065  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:41.363821  213962 cri.go:89] found id: ""
	I1210 06:41:41.363845  213962 logs.go:282] 0 containers: []
	W1210 06:41:41.363854  213962 logs.go:284] No container was found matching "storage-provisioner"
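
[editor's note] The block above (cri.go:54 through logs.go:284) probes each control-plane component in a fixed order: `sudo crictl ps -a --quiet --name=<component>` returns matching container IDs one per line, and an empty result produces the "No container was found matching" warning. Only the static pods (apiserver, etcd, scheduler, controller-manager) are found; coredns, kube-proxy, kindnet, and storage-provisioner never start because the apiserver is unreachable. A minimal sketch of that enumeration pattern follows; the function and variable names are mine, not minikube's.

// listcontainers.go - hypothetical sketch of the per-component enumeration
// above. Assumes crictl is installed and sudo is passwordless, as on the
// test host; errors are reported rather than fatal.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// listContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the non-empty container IDs from its output, one per line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("E listing %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("I %d containers: %v\n", len(ids), ids)
	}
}
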
	I1210 06:41:41.363868  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:41.363880  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:41.539648  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:41.539686  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:41.555270  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:41.555295  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:41.700391  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
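
[editor's note] The failed block above is the "describe nodes" probe: kubectl is run against the node-local kubeconfig, and the connection to localhost:8443 is refused because the apiserver container, although present, is not serving. The sketch below reconstructs that probe under assumed names; the command line is verbatim from the log, and the key behavior shown is that a non-zero exit is downgraded to a warning (logs.go:130) so log gathering continues.

// describe_nodes.go - hypothetical sketch of the probe whose failure is
// logged above: capture stdout and stderr separately and treat exit
// status 1 as a reportable condition, not an abort.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr

	if err := cmd.Run(); err != nil {
		// "connection refused" on localhost:8443 means the apiserver is
		// down; report it and let the caller keep gathering other logs.
		fmt.Printf("W failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
			err, stdout.String(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}
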
	I1210 06:41:41.700413  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:41.700425  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:41.811941  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:41.811974  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:41.894295  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:41.894332  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:41.957685  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:41.957714  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:42.016648  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:42.016686  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:42.089455  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:42.090346  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:44.647137  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:44.658227  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:44.658297  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:44.684882  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:44.684903  213962 cri.go:89] found id: ""
	I1210 06:41:44.684911  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:44.684966  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:44.689032  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:44.689102  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:44.715869  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:44.715889  213962 cri.go:89] found id: ""
	I1210 06:41:44.715897  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:44.715952  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:44.719953  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:44.720078  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:44.750178  213962 cri.go:89] found id: ""
	I1210 06:41:44.750207  213962 logs.go:282] 0 containers: []
	W1210 06:41:44.750217  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:44.750223  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:44.750279  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:44.790591  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:44.790614  213962 cri.go:89] found id: ""
	I1210 06:41:44.790624  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:44.790726  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:44.794784  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:44.794900  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:44.825240  213962 cri.go:89] found id: ""
	I1210 06:41:44.825303  213962 logs.go:282] 0 containers: []
	W1210 06:41:44.825327  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:44.825346  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:44.825419  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:44.857383  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:44.857446  213962 cri.go:89] found id: ""
	I1210 06:41:44.857469  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:44.857538  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:44.861770  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:44.861882  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:44.888060  213962 cri.go:89] found id: ""
	I1210 06:41:44.888123  213962 logs.go:282] 0 containers: []
	W1210 06:41:44.888148  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:44.888168  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:44.888240  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:44.918967  213962 cri.go:89] found id: ""
	I1210 06:41:44.919044  213962 logs.go:282] 0 containers: []
	W1210 06:41:44.919071  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:44.919099  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:44.919136  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:44.960429  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:44.960498  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:44.995045  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:44.995117  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:45.072385  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:45.072557  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:45.128125  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:45.128232  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:45.149420  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:45.149455  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:45.276390  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:45.276416  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:45.276428  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:45.325343  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:45.325372  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:45.398926  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:45.398962  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:47.939141  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:47.949580  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:47.949646  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:47.986815  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:47.986835  213962 cri.go:89] found id: ""
	I1210 06:41:47.986843  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:47.986897  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:47.993250  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:47.993322  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:48.032567  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:48.032588  213962 cri.go:89] found id: ""
	I1210 06:41:48.032596  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:48.032673  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:48.037541  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:48.037619  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:48.080267  213962 cri.go:89] found id: ""
	I1210 06:41:48.080290  213962 logs.go:282] 0 containers: []
	W1210 06:41:48.080298  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:48.080306  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:48.080364  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:48.112476  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:48.112500  213962 cri.go:89] found id: ""
	I1210 06:41:48.112509  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:48.112565  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:48.116690  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:48.116815  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:48.151644  213962 cri.go:89] found id: ""
	I1210 06:41:48.151664  213962 logs.go:282] 0 containers: []
	W1210 06:41:48.151673  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:48.151679  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:48.151738  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:48.236614  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:48.236632  213962 cri.go:89] found id: ""
	I1210 06:41:48.236640  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:48.236694  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:48.241174  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:48.241244  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:48.273541  213962 cri.go:89] found id: ""
	I1210 06:41:48.273562  213962 logs.go:282] 0 containers: []
	W1210 06:41:48.273571  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:48.273577  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:48.273636  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:48.313288  213962 cri.go:89] found id: ""
	I1210 06:41:48.313309  213962 logs.go:282] 0 containers: []
	W1210 06:41:48.313318  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:48.313333  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:48.313344  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:48.348042  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:48.348113  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:48.385352  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:48.385426  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:48.463663  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:48.463692  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:48.463705  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:48.524143  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:48.524216  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:48.562328  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:48.562354  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:48.612391  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:48.612469  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:48.646218  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:48.646290  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:48.710237  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:48.710319  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:51.230393  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:51.241730  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:51.241796  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:51.268806  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:51.268824  213962 cri.go:89] found id: ""
	I1210 06:41:51.268832  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:51.268883  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:51.272908  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:51.272974  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:51.298778  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:51.298796  213962 cri.go:89] found id: ""
	I1210 06:41:51.298803  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:51.298857  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:51.302875  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:51.302990  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:51.329369  213962 cri.go:89] found id: ""
	I1210 06:41:51.329390  213962 logs.go:282] 0 containers: []
	W1210 06:41:51.329397  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:51.329404  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:51.329458  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:51.355900  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:51.355916  213962 cri.go:89] found id: ""
	I1210 06:41:51.355924  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:51.355971  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:51.359956  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:51.360015  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:51.387902  213962 cri.go:89] found id: ""
	I1210 06:41:51.387923  213962 logs.go:282] 0 containers: []
	W1210 06:41:51.387931  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:51.387936  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:51.387990  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:51.423284  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:51.423302  213962 cri.go:89] found id: ""
	I1210 06:41:51.423310  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:51.423366  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:51.432486  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:51.432556  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:51.494827  213962 cri.go:89] found id: ""
	I1210 06:41:51.494856  213962 logs.go:282] 0 containers: []
	W1210 06:41:51.494865  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:51.494871  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:51.494927  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:51.520863  213962 cri.go:89] found id: ""
	I1210 06:41:51.520888  213962 logs.go:282] 0 containers: []
	W1210 06:41:51.520897  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:51.520913  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:51.520927  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:51.564067  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:51.564095  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:51.628109  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:51.628144  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:51.642116  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:51.642150  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:51.776535  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:51.776560  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:51.776573  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:51.817343  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:51.817373  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:51.856552  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:51.856581  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:51.892853  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:51.892883  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:51.939660  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:51.939693  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:54.472207  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:54.483767  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:54.483832  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:54.517525  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:54.517545  213962 cri.go:89] found id: ""
	I1210 06:41:54.517553  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:54.517609  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:54.522043  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:54.522113  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:54.548657  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:54.548675  213962 cri.go:89] found id: ""
	I1210 06:41:54.548684  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:54.548736  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:54.552967  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:54.553088  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:54.581389  213962 cri.go:89] found id: ""
	I1210 06:41:54.581411  213962 logs.go:282] 0 containers: []
	W1210 06:41:54.581420  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:54.581426  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:54.581483  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:54.620922  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:54.620940  213962 cri.go:89] found id: ""
	I1210 06:41:54.620948  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:54.621006  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:54.625316  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:54.625399  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:54.652910  213962 cri.go:89] found id: ""
	I1210 06:41:54.652932  213962 logs.go:282] 0 containers: []
	W1210 06:41:54.652940  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:54.652947  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:54.653005  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:54.695672  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:54.695748  213962 cri.go:89] found id: ""
	I1210 06:41:54.695770  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:54.695850  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:54.707619  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:54.707688  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:54.761739  213962 cri.go:89] found id: ""
	I1210 06:41:54.761765  213962 logs.go:282] 0 containers: []
	W1210 06:41:54.761774  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:54.761780  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:54.761894  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:54.799725  213962 cri.go:89] found id: ""
	I1210 06:41:54.799762  213962 logs.go:282] 0 containers: []
	W1210 06:41:54.799771  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:54.799819  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:54.799842  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:54.848657  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:54.848693  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:54.890354  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:54.890395  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:54.988190  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:54.988208  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:54.988221  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:55.042144  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:55.042179  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:55.076449  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:55.076481  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:55.111882  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:55.111917  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:41:55.145060  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:55.145088  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:55.218011  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:55.218048  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
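
[editor's note] Stepping back from the individual cycles: each one opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` and, per the timestamps (06:41:38, :40.9, :44.6, :47.9, :51.2, :54.5, ...), repeats roughly every three seconds until the apiserver process answers or an overall deadline expires. The sketch below is an assumed reconstruction of that wait loop; the three-second interval and two-minute deadline are inferred from this log's cadence, not read from minikube's source.

// waitapiserver.go - hypothetical sketch of the polling loop driving the
// repeated diagnostic cycles above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning checks for a kube-apiserver process the same way the
// log does: pgrep -x (exact match) -n (newest) -f (full command line).
// pgrep exits 0 only when at least one process matched.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("I kube-apiserver process found")
			return
		}
		// On each miss the real runner re-enumerates containers and
		// re-gathers logs, producing one cycle like those above.
		fmt.Println("I kube-apiserver not up yet; retrying")
		time.Sleep(3 * time.Second) // interval inferred from timestamps
	}
	fmt.Println("E timed out waiting for kube-apiserver")
}
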
	I1210 06:41:57.735841  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:41:57.747529  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:41:57.747596  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:41:57.783214  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:57.783233  213962 cri.go:89] found id: ""
	I1210 06:41:57.783241  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:41:57.783310  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:57.787496  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:41:57.787567  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:41:57.815913  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:57.815935  213962 cri.go:89] found id: ""
	I1210 06:41:57.815944  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:41:57.815998  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:57.819984  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:41:57.820052  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:41:57.849031  213962 cri.go:89] found id: ""
	I1210 06:41:57.849052  213962 logs.go:282] 0 containers: []
	W1210 06:41:57.849061  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:41:57.849068  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:41:57.849128  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:41:57.880196  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:57.880215  213962 cri.go:89] found id: ""
	I1210 06:41:57.880222  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:41:57.880288  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:57.884273  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:41:57.884335  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:41:57.917309  213962 cri.go:89] found id: ""
	I1210 06:41:57.917331  213962 logs.go:282] 0 containers: []
	W1210 06:41:57.917339  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:41:57.917345  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:41:57.917404  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:41:57.944878  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:57.944903  213962 cri.go:89] found id: ""
	I1210 06:41:57.944911  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:41:57.944967  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:41:57.949126  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:41:57.949243  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:41:57.980284  213962 cri.go:89] found id: ""
	I1210 06:41:57.980307  213962 logs.go:282] 0 containers: []
	W1210 06:41:57.980316  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:41:57.980322  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:41:57.980383  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:41:58.019253  213962 cri.go:89] found id: ""
	I1210 06:41:58.019277  213962 logs.go:282] 0 containers: []
	W1210 06:41:58.019285  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:41:58.019299  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:41:58.019311  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:41:58.088477  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:41:58.088552  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:41:58.102086  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:41:58.102111  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:41:58.219936  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:41:58.219953  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:41:58.219964  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:41:58.285197  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:41:58.285266  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:41:58.322677  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:41:58.322758  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:41:58.386235  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:41:58.386310  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:41:58.428983  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:41:58.429120  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:41:58.477833  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:41:58.477909  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:01.038482  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:01.050024  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:01.050094  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:01.079233  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:01.079253  213962 cri.go:89] found id: ""
	I1210 06:42:01.079268  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:01.079333  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:01.083899  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:01.083984  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:01.118547  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:01.118566  213962 cri.go:89] found id: ""
	I1210 06:42:01.118574  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:01.118629  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:01.123083  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:01.123152  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:01.158862  213962 cri.go:89] found id: ""
	I1210 06:42:01.158889  213962 logs.go:282] 0 containers: []
	W1210 06:42:01.158899  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:01.158906  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:01.158969  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:01.224198  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:01.224217  213962 cri.go:89] found id: ""
	I1210 06:42:01.224226  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:01.224282  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:01.229040  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:01.229177  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:01.269443  213962 cri.go:89] found id: ""
	I1210 06:42:01.269524  213962 logs.go:282] 0 containers: []
	W1210 06:42:01.269546  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:01.269583  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:01.269685  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:01.301635  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:01.301727  213962 cri.go:89] found id: ""
	I1210 06:42:01.301751  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:01.301837  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:01.306308  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:01.306459  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:01.342073  213962 cri.go:89] found id: ""
	I1210 06:42:01.342153  213962 logs.go:282] 0 containers: []
	W1210 06:42:01.342195  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:01.342215  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:01.342304  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:01.372644  213962 cri.go:89] found id: ""
	I1210 06:42:01.372718  213962 logs.go:282] 0 containers: []
	W1210 06:42:01.372748  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:01.372800  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:01.372832  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:01.388915  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:01.388992  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:01.425312  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:01.425388  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:01.459144  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:01.459227  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:01.523710  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:01.523791  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:01.605028  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:01.605102  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:01.605146  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:01.637961  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:01.638045  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:01.680471  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:01.680554  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:01.714507  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:01.714540  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:04.245826  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:04.268358  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:04.268430  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:04.332457  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:04.332481  213962 cri.go:89] found id: ""
	I1210 06:42:04.332497  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:04.332562  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:04.339024  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:04.339102  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:04.389287  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:04.389312  213962 cri.go:89] found id: ""
	I1210 06:42:04.389321  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:04.389384  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:04.393499  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:04.393597  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:04.443238  213962 cri.go:89] found id: ""
	I1210 06:42:04.443273  213962 logs.go:282] 0 containers: []
	W1210 06:42:04.443282  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:04.443306  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:04.443389  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:04.490025  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:04.490055  213962 cri.go:89] found id: ""
	I1210 06:42:04.490064  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:04.490155  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:04.499410  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:04.499514  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:04.549797  213962 cri.go:89] found id: ""
	I1210 06:42:04.549830  213962 logs.go:282] 0 containers: []
	W1210 06:42:04.549840  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:04.549846  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:04.549911  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:04.609731  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:04.609760  213962 cri.go:89] found id: ""
	I1210 06:42:04.609769  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:04.609833  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:04.613376  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:04.613460  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:04.648313  213962 cri.go:89] found id: ""
	I1210 06:42:04.648339  213962 logs.go:282] 0 containers: []
	W1210 06:42:04.648347  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:04.648353  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:04.648421  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:04.688337  213962 cri.go:89] found id: ""
	I1210 06:42:04.688364  213962 logs.go:282] 0 containers: []
	W1210 06:42:04.688374  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:04.688400  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:04.688419  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:04.779070  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:04.779147  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:04.822330  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:04.822411  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:04.874790  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:04.874862  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:04.946579  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:04.946651  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:05.022752  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:05.022834  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:05.109122  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:05.109148  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:05.129793  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:05.129863  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:05.251104  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:05.251164  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:05.251191  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:07.799000  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:07.810016  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:07.810133  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:07.865736  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:07.865811  213962 cri.go:89] found id: ""
	I1210 06:42:07.865833  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:07.865923  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:07.873912  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:07.874033  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:07.905139  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:07.905211  213962 cri.go:89] found id: ""
	I1210 06:42:07.905233  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:07.905318  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:07.909281  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:07.909399  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:07.952310  213962 cri.go:89] found id: ""
	I1210 06:42:07.952387  213962 logs.go:282] 0 containers: []
	W1210 06:42:07.952410  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:07.952428  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:07.952521  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:08.001791  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:08.001875  213962 cri.go:89] found id: ""
	I1210 06:42:08.001897  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:08.002004  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:08.010168  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:08.010290  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:08.049573  213962 cri.go:89] found id: ""
	I1210 06:42:08.049643  213962 logs.go:282] 0 containers: []
	W1210 06:42:08.049676  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:08.049695  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:08.049798  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:08.092828  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:08.092905  213962 cri.go:89] found id: ""
	I1210 06:42:08.092929  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:08.093013  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:08.103418  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:08.103536  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:08.149259  213962 cri.go:89] found id: ""
	I1210 06:42:08.149335  213962 logs.go:282] 0 containers: []
	W1210 06:42:08.149356  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:08.149374  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:08.149462  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:08.189757  213962 cri.go:89] found id: ""
	I1210 06:42:08.189822  213962 logs.go:282] 0 containers: []
	W1210 06:42:08.189844  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:08.189869  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:08.189906  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:08.275116  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:08.275194  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:08.301924  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:08.301949  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:08.387487  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:08.395159  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:08.451372  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:08.451454  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:08.577031  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:08.577048  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:08.577060  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:08.645779  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:08.645867  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:08.692551  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:08.692623  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:08.759251  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:08.759333  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:11.305243  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:11.315831  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:11.315906  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:11.343186  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:11.343205  213962 cri.go:89] found id: ""
	I1210 06:42:11.343221  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:11.343280  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:11.346848  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:11.346916  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:11.371994  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:11.372017  213962 cri.go:89] found id: ""
	I1210 06:42:11.372025  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:11.372083  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:11.375732  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:11.375801  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:11.402874  213962 cri.go:89] found id: ""
	I1210 06:42:11.402896  213962 logs.go:282] 0 containers: []
	W1210 06:42:11.402904  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:11.402911  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:11.402975  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:11.433592  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:11.433612  213962 cri.go:89] found id: ""
	I1210 06:42:11.433620  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:11.433675  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:11.437702  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:11.437772  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:11.483426  213962 cri.go:89] found id: ""
	I1210 06:42:11.483447  213962 logs.go:282] 0 containers: []
	W1210 06:42:11.483455  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:11.483462  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:11.483521  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:11.517076  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:11.517097  213962 cri.go:89] found id: ""
	I1210 06:42:11.517106  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:11.517161  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:11.521391  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:11.521459  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:11.550162  213962 cri.go:89] found id: ""
	I1210 06:42:11.550183  213962 logs.go:282] 0 containers: []
	W1210 06:42:11.550191  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:11.550198  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:11.550256  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:11.588413  213962 cri.go:89] found id: ""
	I1210 06:42:11.588434  213962 logs.go:282] 0 containers: []
	W1210 06:42:11.588443  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:11.588456  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:11.588467  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:11.679821  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:11.679839  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:11.679851  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:11.748761  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:11.748858  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:11.799231  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:11.799308  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:11.859681  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:11.859722  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:11.916289  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:11.916321  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:11.963473  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:11.963547  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:12.013179  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:12.013263  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:12.081706  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:12.081780  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
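
	The "container status" step in each sweep uses a two-level shell fallback: `which crictl || echo crictl` degrades to the bare command name when no absolute path is found (leaving sudo's PATH to resolve it), and the whole crictl invocation falls through to docker if it fails. Spelled out with comments, the command itself verbatim from the Run: lines above:

	    # Prefer the resolved crictl path; fall back to the bare name, then to docker.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
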
	I1210 06:42:14.599143  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:14.610605  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:14.610687  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:14.650312  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:14.650330  213962 cri.go:89] found id: ""
	I1210 06:42:14.650405  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:14.650460  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:14.655197  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:14.655269  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:14.688167  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:14.688188  213962 cri.go:89] found id: ""
	I1210 06:42:14.688196  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:14.688250  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:14.692093  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:14.692175  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:14.721291  213962 cri.go:89] found id: ""
	I1210 06:42:14.721315  213962 logs.go:282] 0 containers: []
	W1210 06:42:14.721324  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:14.721330  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:14.721397  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:14.749306  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:14.749327  213962 cri.go:89] found id: ""
	I1210 06:42:14.749335  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:14.749395  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:14.756616  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:14.756690  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:14.782439  213962 cri.go:89] found id: ""
	I1210 06:42:14.782464  213962 logs.go:282] 0 containers: []
	W1210 06:42:14.782473  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:14.782480  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:14.782538  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:14.812878  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:14.812898  213962 cri.go:89] found id: ""
	I1210 06:42:14.812913  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:14.812970  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:14.820405  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:14.820467  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:14.856012  213962 cri.go:89] found id: ""
	I1210 06:42:14.856082  213962 logs.go:282] 0 containers: []
	W1210 06:42:14.856109  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:14.856126  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:14.856219  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:14.889436  213962 cri.go:89] found id: ""
	I1210 06:42:14.889507  213962 logs.go:282] 0 containers: []
	W1210 06:42:14.889529  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:14.889558  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:14.889597  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:14.919737  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:14.919811  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:14.954534  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:14.954603  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:15.019427  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:15.019525  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:15.071913  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:15.071996  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:15.144516  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:15.144556  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:15.181226  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:15.181259  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:15.242078  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:15.242120  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:15.255449  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:15.255476  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:15.318621  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
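
	Every "describe nodes" attempt fails identically: kubectl cannot reach the apiserver on localhost:8443 even though a kube-apiserver container exists in containerd. One way to watch for recovery from inside the node is to probe the health endpoint that the failing kubectl calls depend on; a minimal sketch, assuming curl is available in the node image and that 8443 is the serving port, as the refusals above indicate:

	    # "connection refused" while the apiserver is down; "ok" once it serves
	    curl -sk https://localhost:8443/healthz || echo "apiserver not serving on :8443"
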
	I1210 06:42:17.819128  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:17.830371  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:17.830442  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:17.860391  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:17.860413  213962 cri.go:89] found id: ""
	I1210 06:42:17.860421  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:17.860475  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:17.867368  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:17.867442  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:17.909314  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:17.909332  213962 cri.go:89] found id: ""
	I1210 06:42:17.909340  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:17.909392  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:17.913542  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:17.913606  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:17.946131  213962 cri.go:89] found id: ""
	I1210 06:42:17.946157  213962 logs.go:282] 0 containers: []
	W1210 06:42:17.946166  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:17.946172  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:17.946226  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:17.978964  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:17.978984  213962 cri.go:89] found id: ""
	I1210 06:42:17.978993  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:17.979079  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:17.983977  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:17.984050  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:18.014566  213962 cri.go:89] found id: ""
	I1210 06:42:18.014594  213962 logs.go:282] 0 containers: []
	W1210 06:42:18.014604  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:18.014611  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:18.014676  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:18.049049  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:18.049073  213962 cri.go:89] found id: ""
	I1210 06:42:18.049082  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:18.049142  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:18.053993  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:18.054069  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:18.090287  213962 cri.go:89] found id: ""
	I1210 06:42:18.090315  213962 logs.go:282] 0 containers: []
	W1210 06:42:18.090325  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:18.090345  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:18.090408  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:18.121712  213962 cri.go:89] found id: ""
	I1210 06:42:18.121733  213962 logs.go:282] 0 containers: []
	W1210 06:42:18.121741  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:18.121756  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:18.121784  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:18.156458  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:18.156534  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:18.217295  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:18.217451  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:18.297681  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:18.297713  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:18.388094  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:18.388117  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:18.388131  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:18.428033  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:18.428061  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:18.469734  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:18.469760  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:18.519289  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:18.519316  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:18.532606  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:18.532635  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:21.072997  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:21.087575  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:21.087646  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:21.125140  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:21.125155  213962 cri.go:89] found id: ""
	I1210 06:42:21.125163  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:21.125215  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:21.128876  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:21.128954  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:21.156836  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:21.156854  213962 cri.go:89] found id: ""
	I1210 06:42:21.156863  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:21.156912  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:21.171971  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:21.172042  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:21.243195  213962 cri.go:89] found id: ""
	I1210 06:42:21.243216  213962 logs.go:282] 0 containers: []
	W1210 06:42:21.243224  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:21.243230  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:21.243285  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:21.284014  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:21.284029  213962 cri.go:89] found id: ""
	I1210 06:42:21.284037  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:21.284093  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:21.288178  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:21.288249  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:21.317528  213962 cri.go:89] found id: ""
	I1210 06:42:21.317548  213962 logs.go:282] 0 containers: []
	W1210 06:42:21.317557  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:21.317564  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:21.317636  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:21.351323  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:21.351341  213962 cri.go:89] found id: ""
	I1210 06:42:21.351349  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:21.351399  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:21.355237  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:21.355313  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:21.386768  213962 cri.go:89] found id: ""
	I1210 06:42:21.386789  213962 logs.go:282] 0 containers: []
	W1210 06:42:21.386797  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:21.386804  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:21.386862  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:21.422391  213962 cri.go:89] found id: ""
	I1210 06:42:21.422413  213962 logs.go:282] 0 containers: []
	W1210 06:42:21.422421  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:21.422436  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:21.422446  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:21.459954  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:21.459983  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:21.500592  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:21.500625  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:21.565479  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:21.565515  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:21.579435  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:21.579463  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:21.623584  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:21.623614  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:21.659077  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:21.659121  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:21.690258  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:21.690285  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:21.766231  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:21.766247  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:21.766259  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:24.307600  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:24.319757  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:24.319828  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:24.358285  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:24.358311  213962 cri.go:89] found id: ""
	I1210 06:42:24.358321  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:24.358395  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:24.366235  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:24.366318  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:24.407562  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:24.407586  213962 cri.go:89] found id: ""
	I1210 06:42:24.407600  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:24.407656  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:24.411332  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:24.411414  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:24.442897  213962 cri.go:89] found id: ""
	I1210 06:42:24.442918  213962 logs.go:282] 0 containers: []
	W1210 06:42:24.442933  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:24.442941  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:24.443002  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:24.473869  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:24.473891  213962 cri.go:89] found id: ""
	I1210 06:42:24.473900  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:24.473958  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:24.478066  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:24.478142  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:24.506859  213962 cri.go:89] found id: ""
	I1210 06:42:24.506883  213962 logs.go:282] 0 containers: []
	W1210 06:42:24.506894  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:24.506900  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:24.506956  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:24.534541  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:24.534566  213962 cri.go:89] found id: ""
	I1210 06:42:24.534575  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:24.534630  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:24.538056  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:24.538126  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:24.571528  213962 cri.go:89] found id: ""
	I1210 06:42:24.571559  213962 logs.go:282] 0 containers: []
	W1210 06:42:24.571569  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:24.571575  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:24.571637  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:24.599447  213962 cri.go:89] found id: ""
	I1210 06:42:24.599473  213962 logs.go:282] 0 containers: []
	W1210 06:42:24.599482  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:24.599494  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:24.599505  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:24.662728  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:24.662764  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:24.678084  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:24.678116  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:24.768921  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:24.768940  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:24.768951  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:24.822667  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:24.822712  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:24.862136  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:24.862178  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:24.895929  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:24.895958  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:24.950500  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:24.950528  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:24.990954  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:24.990987  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:27.543612  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:27.554503  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:27.554571  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:27.584179  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:27.584201  213962 cri.go:89] found id: ""
	I1210 06:42:27.584209  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:27.584269  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:27.587950  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:27.588023  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:27.617788  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:27.617809  213962 cri.go:89] found id: ""
	I1210 06:42:27.617818  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:27.617872  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:27.621734  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:27.621811  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:27.648793  213962 cri.go:89] found id: ""
	I1210 06:42:27.648817  213962 logs.go:282] 0 containers: []
	W1210 06:42:27.648825  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:27.648831  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:27.648886  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:27.703539  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:27.703573  213962 cri.go:89] found id: ""
	I1210 06:42:27.703581  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:27.703641  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:27.707302  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:27.707390  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:27.747908  213962 cri.go:89] found id: ""
	I1210 06:42:27.747929  213962 logs.go:282] 0 containers: []
	W1210 06:42:27.747937  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:27.747944  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:27.747998  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:27.800766  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:27.800784  213962 cri.go:89] found id: ""
	I1210 06:42:27.800792  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:27.800846  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:27.804313  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:27.804381  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:27.843073  213962 cri.go:89] found id: ""
	I1210 06:42:27.843093  213962 logs.go:282] 0 containers: []
	W1210 06:42:27.843102  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:27.843112  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:27.843170  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:27.879932  213962 cri.go:89] found id: ""
	I1210 06:42:27.880084  213962 logs.go:282] 0 containers: []
	W1210 06:42:27.880096  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:27.880115  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:27.880134  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:28.015352  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:28.015371  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:28.015388  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:28.065142  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:28.065168  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:28.105858  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:28.105892  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:28.172074  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:28.172098  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:28.272032  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:28.272067  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:28.340080  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:28.340131  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:28.391489  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:28.391520  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:28.431303  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:28.431335  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
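
	By this point the sweep has repeated roughly every three seconds since 06:42:04, always with the same result: apiserver, etcd, scheduler and controller-manager containers exist, but localhost:8443 refuses connections. When reproducing such a hang interactively, a small poll loop around minikube's own process probe is enough; a hedged sketch, where the 3-second interval is matched to the cadence observed in this log rather than taken from minikube itself:

	    # Wait until the newest kube-apiserver process for this profile exists,
	    # then check whether the port actually accepts connections.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	        sleep 3   # interval assumed from the retry cadence above
	    done
	    curl -sk https://localhost:8443/healthz
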
	I1210 06:42:30.947100  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:30.957683  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:30.957753  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:30.983147  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:30.983168  213962 cri.go:89] found id: ""
	I1210 06:42:30.983177  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:30.983236  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:30.986820  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:30.986887  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:31.028038  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:31.028060  213962 cri.go:89] found id: ""
	I1210 06:42:31.028069  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:31.028124  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:31.031874  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:31.031944  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:31.056773  213962 cri.go:89] found id: ""
	I1210 06:42:31.056799  213962 logs.go:282] 0 containers: []
	W1210 06:42:31.056808  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:31.056814  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:31.056872  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:31.085342  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:31.085402  213962 cri.go:89] found id: ""
	I1210 06:42:31.085412  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:31.085465  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:31.089110  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:31.089181  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:31.115545  213962 cri.go:89] found id: ""
	I1210 06:42:31.115569  213962 logs.go:282] 0 containers: []
	W1210 06:42:31.115578  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:31.115584  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:31.115649  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:31.142438  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:31.142459  213962 cri.go:89] found id: ""
	I1210 06:42:31.142467  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:31.142526  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:31.146307  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:31.146386  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:31.183858  213962 cri.go:89] found id: ""
	I1210 06:42:31.183883  213962 logs.go:282] 0 containers: []
	W1210 06:42:31.183894  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:31.183900  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:31.183959  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:31.209629  213962 cri.go:89] found id: ""
	I1210 06:42:31.209654  213962 logs.go:282] 0 containers: []
	W1210 06:42:31.209663  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:31.209676  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:31.209687  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:31.274743  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:31.274765  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:31.274777  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:31.313845  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:31.313875  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:31.350284  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:31.350314  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:31.379674  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:31.379702  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:31.436940  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:31.436975  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:31.450293  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:31.450328  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:31.483088  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:31.483123  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:31.509399  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:31.509428  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
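[annotation] The cycle above repeats throughout this section: minikube enumerates each expected control-plane component with a crictl name filter, then tails the log of every container it finds. A minimal bash sketch of that probe loop, assembled only from the Run: commands shown above (the component list and the --tail 400 value come from the log; the loop itself is an illustration, not minikube's actual Go code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      # list all containers (any state) whose name matches the component
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\"" >&2
        continue
      fi
      # tail the last 400 lines of each matching container, as logs.go does
      for id in $ids; do
        sudo /usr/local/bin/crictl logs --tail 400 "$id"
      done
    done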
	I1210 06:42:34.042896  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:34.054746  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:34.054811  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:34.082562  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:34.082581  213962 cri.go:89] found id: ""
	I1210 06:42:34.082589  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:34.082645  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:34.088812  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:34.088881  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:34.117483  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:34.117501  213962 cri.go:89] found id: ""
	I1210 06:42:34.117510  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:34.117565  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:34.121656  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:34.121722  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:34.150582  213962 cri.go:89] found id: ""
	I1210 06:42:34.150603  213962 logs.go:282] 0 containers: []
	W1210 06:42:34.150612  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:34.150618  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:34.150677  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:34.204191  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:34.204208  213962 cri.go:89] found id: ""
	I1210 06:42:34.204216  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:34.204267  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:34.212001  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:34.212068  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:34.253403  213962 cri.go:89] found id: ""
	I1210 06:42:34.253425  213962 logs.go:282] 0 containers: []
	W1210 06:42:34.253433  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:34.253439  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:34.253494  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:34.297466  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:34.297540  213962 cri.go:89] found id: ""
	I1210 06:42:34.297563  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:34.297649  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:34.308645  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:34.308758  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:34.352556  213962 cri.go:89] found id: ""
	I1210 06:42:34.352629  213962 logs.go:282] 0 containers: []
	W1210 06:42:34.352651  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:34.352669  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:34.352757  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:34.391438  213962 cri.go:89] found id: ""
	I1210 06:42:34.391505  213962 logs.go:282] 0 containers: []
	W1210 06:42:34.391530  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:34.391561  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:34.391605  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:34.435720  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:34.435903  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:34.470561  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:34.470591  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:34.511237  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:34.511317  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:34.581136  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:34.581201  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:34.613866  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:34.613900  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:34.645125  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:34.645156  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:34.660464  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:34.660491  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:34.742102  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:34.742124  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:34.742136  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
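[annotation] Each pass also runs "kubectl describe nodes" against the local kubeconfig and fails with connection refused on localhost:8443, which means the kube-apiserver container exists but nothing is serving on the secure port. A quick way to separate the two cases by hand (the kubectl invocation is copied from the log; the curl probe is an illustrative assumption):

    # the exact command the log shows failing
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # direct probe of the apiserver port; "connection refused" here confirms
    # nothing is listening, independent of any kubeconfig problem
    curl -k https://localhost:8443/healthz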
	I1210 06:42:37.289805  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:37.301987  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:37.302056  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:37.335769  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:37.335790  213962 cri.go:89] found id: ""
	I1210 06:42:37.335799  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:37.335854  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:37.340075  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:37.340161  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:37.367486  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:37.367508  213962 cri.go:89] found id: ""
	I1210 06:42:37.367517  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:37.367572  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:37.371667  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:37.371736  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:37.399946  213962 cri.go:89] found id: ""
	I1210 06:42:37.399968  213962 logs.go:282] 0 containers: []
	W1210 06:42:37.399976  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:37.399982  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:37.400042  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:37.440008  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:37.440081  213962 cri.go:89] found id: ""
	I1210 06:42:37.440115  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:37.440200  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:37.447647  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:37.447716  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:37.492942  213962 cri.go:89] found id: ""
	I1210 06:42:37.493007  213962 logs.go:282] 0 containers: []
	W1210 06:42:37.493032  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:37.493052  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:37.493137  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:37.542142  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:37.542216  213962 cri.go:89] found id: ""
	I1210 06:42:37.542237  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:37.542319  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:37.548057  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:37.548141  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:37.595535  213962 cri.go:89] found id: ""
	I1210 06:42:37.595571  213962 logs.go:282] 0 containers: []
	W1210 06:42:37.595589  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:37.595596  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:37.595686  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:37.633615  213962 cri.go:89] found id: ""
	I1210 06:42:37.633646  213962 logs.go:282] 0 containers: []
	W1210 06:42:37.633655  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:37.633669  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:37.633685  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:37.672024  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:37.672058  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:37.728594  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:37.728629  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:37.767875  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:37.767905  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:37.799713  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:37.799748  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:37.835165  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:37.835241  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:37.901827  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:37.901860  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:37.921067  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:37.921091  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:38.005135  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:38.005161  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:38.005176  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:40.539136  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:40.549843  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:42:40.549952  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:42:40.579359  213962 cri.go:89] found id: "7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:40.579422  213962 cri.go:89] found id: ""
	I1210 06:42:40.579444  213962 logs.go:282] 1 containers: [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d]
	I1210 06:42:40.579515  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:40.583663  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:42:40.583744  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:42:40.610959  213962 cri.go:89] found id: "7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:40.610981  213962 cri.go:89] found id: ""
	I1210 06:42:40.610989  213962 logs.go:282] 1 containers: [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0]
	I1210 06:42:40.611094  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:40.614947  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:42:40.615039  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:42:40.649972  213962 cri.go:89] found id: ""
	I1210 06:42:40.649998  213962 logs.go:282] 0 containers: []
	W1210 06:42:40.650007  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:42:40.650013  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:42:40.650073  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:42:40.682881  213962 cri.go:89] found id: "3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:40.682904  213962 cri.go:89] found id: ""
	I1210 06:42:40.682913  213962 logs.go:282] 1 containers: [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988]
	I1210 06:42:40.682968  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:40.687144  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:42:40.687218  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:42:40.713845  213962 cri.go:89] found id: ""
	I1210 06:42:40.713870  213962 logs.go:282] 0 containers: []
	W1210 06:42:40.713879  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:42:40.713885  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:42:40.713941  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:42:40.739024  213962 cri.go:89] found id: "d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:40.739047  213962 cri.go:89] found id: ""
	I1210 06:42:40.739055  213962 logs.go:282] 1 containers: [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15]
	I1210 06:42:40.739114  213962 ssh_runner.go:195] Run: which crictl
	I1210 06:42:40.743389  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:42:40.743461  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:42:40.786740  213962 cri.go:89] found id: ""
	I1210 06:42:40.786765  213962 logs.go:282] 0 containers: []
	W1210 06:42:40.786783  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:42:40.786790  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:42:40.786848  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:42:40.812666  213962 cri.go:89] found id: ""
	I1210 06:42:40.812691  213962 logs.go:282] 0 containers: []
	W1210 06:42:40.812700  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:42:40.812713  213962 logs.go:123] Gathering logs for kube-apiserver [7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d] ...
	I1210 06:42:40.812728  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d"
	I1210 06:42:40.856310  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:42:40.856347  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:42:40.887275  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:42:40.887304  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:42:40.967251  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:42:40.967272  213962 logs.go:123] Gathering logs for etcd [7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0] ...
	I1210 06:42:40.967285  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0"
	I1210 06:42:41.006395  213962 logs.go:123] Gathering logs for kube-scheduler [3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988] ...
	I1210 06:42:41.006427  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988"
	I1210 06:42:41.036072  213962 logs.go:123] Gathering logs for kube-controller-manager [d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15] ...
	I1210 06:42:41.036104  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15"
	I1210 06:42:41.076654  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:42:41.076684  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:42:41.111814  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:42:41.111891  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:42:41.176248  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:42:41.176321  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:42:43.691136  213962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:42:43.701667  213962 kubeadm.go:602] duration metric: took 4m4.175652246s to restartPrimaryControlPlane
	W1210 06:42:43.701741  213962 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:42:43.701801  213962 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
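[annotation] Between log-gathering passes, minikube polls for a running apiserver process with pgrep; after 4m4s of failed polls it gives up on restarting the control plane and falls back to the kubeadm reset just above. The poll, reduced to a loop around the exact command from the log (the sleep cadence is inferred from the ~3 s gaps between timestamps, so treat it as an assumption):

    # -x exact match, -n newest process, -f match against the full command line
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
    done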
	I1210 06:42:44.216971  213962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:42:44.234792  213962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:42:44.246135  213962 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:42:44.246197  213962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:42:44.257295  213962 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:42:44.257315  213962 kubeadm.go:158] found existing configuration files:
	
	I1210 06:42:44.257370  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:42:44.268330  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:42:44.268397  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:42:44.276711  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:42:44.288400  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:42:44.288467  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:42:44.296417  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:42:44.305960  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:42:44.306021  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:42:44.316129  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:42:44.324797  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:42:44.324902  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
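[annotation] The stale-config cleanup after kubeadm reset is a simple per-file check: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, it is removed so kubeadm init can rewrite it. A sketch of the equivalent shell loop (file list and endpoint taken from the log; the loop is illustrative):

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # grep exits non-zero when the endpoint is absent or the file is missing,
      # which is the "may not be in ... - will remove" case logged above
      if ! sudo grep -q "$endpoint" "$conf"; then
        sudo rm -f "$conf"
      fi
    done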
	I1210 06:42:44.332201  213962 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:42:44.383534  213962 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:42:44.383805  213962 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:42:44.480017  213962 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:42:44.480121  213962 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:42:44.480186  213962 kubeadm.go:319] OS: Linux
	I1210 06:42:44.480259  213962 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:42:44.480334  213962 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:42:44.480408  213962 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:42:44.480479  213962 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:42:44.480549  213962 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:42:44.480621  213962 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:42:44.480690  213962 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:42:44.480774  213962 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:42:44.480842  213962 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:42:44.560284  213962 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:42:44.560432  213962 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:42:44.560552  213962 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:42:44.570308  213962 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:42:44.575672  213962 out.go:252]   - Generating certificates and keys ...
	I1210 06:42:44.575804  213962 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:42:44.575894  213962 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:42:44.576006  213962 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:42:44.576376  213962 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:42:44.576990  213962 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:42:44.577528  213962 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:42:44.578067  213962 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:42:44.578575  213962 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:42:44.579092  213962 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:42:44.579591  213962 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:42:44.580022  213962 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:42:44.580205  213962 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:42:44.771246  213962 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:42:44.814201  213962 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:42:45.049367  213962 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:42:45.497014  213962 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:42:45.833525  213962 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:42:45.834191  213962 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:42:45.837089  213962 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:42:45.840797  213962 out.go:252]   - Booting up control plane ...
	I1210 06:42:45.840908  213962 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:42:45.840986  213962 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:42:45.841052  213962 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:42:45.865407  213962 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:42:45.865667  213962 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:42:45.876346  213962 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:42:45.877161  213962 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:42:45.877210  213962 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:42:46.443616  213962 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:42:46.443736  213962 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:46:46.431685  213962 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001336447s
	I1210 06:46:46.431718  213962 kubeadm.go:319] 
	I1210 06:46:46.431775  213962 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:46:46.431817  213962 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:46:46.431922  213962 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:46:46.431929  213962 kubeadm.go:319] 
	I1210 06:46:46.432033  213962 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:46:46.432065  213962 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:46:46.432096  213962 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:46:46.432100  213962 kubeadm.go:319] 
	I1210 06:46:46.436580  213962 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:46:46.437004  213962 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:46:46.437108  213962 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:46:46.437330  213962 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:46:46.437335  213962 kubeadm.go:319] 
	I1210 06:46:46.437400  213962 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:46:46.437500  213962 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001336447s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
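[annotation] The init failure itself is a kubelet liveness timeout: kubeadm polls http://127.0.0.1:10248/healthz for 4m0s and never gets a healthy answer. The node-side checks below are all taken from the error output above; note also the cgroup v1 warning, which says kubelet v1.35+ on a cgroup v1 host requires the KubeletConfiguration option 'FailCgroupV1' to be set to 'false':

    # the probe kubeadm runs while waiting for the kubelet
    curl -sSL http://127.0.0.1:10248/healthz
    # the troubleshooting steps suggested in the error message
    systemctl status kubelet
    journalctl -xeu kubelet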
	
	I1210 06:46:46.437572  213962 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:46:46.850130  213962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:46:46.863737  213962 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:46:46.863806  213962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:46:46.872001  213962 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:46:46.872021  213962 kubeadm.go:158] found existing configuration files:
	
	I1210 06:46:46.872075  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:46:46.880014  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:46:46.880084  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:46:46.890507  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:46:46.898773  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:46:46.898841  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:46:46.906617  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:46:46.914768  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:46:46.914838  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:46:46.922861  213962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:46:46.931168  213962 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:46:46.931234  213962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:46:46.939466  213962 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:46:46.978629  213962 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:46:46.978724  213962 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:46:47.065305  213962 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:46:47.065386  213962 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:46:47.065422  213962 kubeadm.go:319] OS: Linux
	I1210 06:46:47.065467  213962 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:46:47.065515  213962 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:46:47.065562  213962 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:46:47.065610  213962 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:46:47.065658  213962 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:46:47.065708  213962 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:46:47.065754  213962 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:46:47.065802  213962 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:46:47.065849  213962 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:46:47.134443  213962 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:46:47.134554  213962 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:46:47.134644  213962 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:46:47.143517  213962 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:46:47.147004  213962 out.go:252]   - Generating certificates and keys ...
	I1210 06:46:47.147137  213962 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:46:47.147217  213962 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:46:47.147332  213962 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:46:47.147403  213962 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:46:47.147486  213962 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:46:47.147565  213962 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:46:47.147637  213962 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:46:47.147700  213962 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:46:47.147775  213962 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:46:47.147849  213962 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:46:47.147887  213962 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:46:47.147947  213962 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:46:47.623660  213962 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:46:47.738579  213962 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:46:48.041247  213962 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:46:48.261142  213962 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:46:48.454865  213962 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:46:48.456349  213962 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:46:48.459850  213962 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:46:48.463103  213962 out.go:252]   - Booting up control plane ...
	I1210 06:46:48.463228  213962 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:46:48.467739  213962 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:46:48.469459  213962 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:46:48.501463  213962 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:46:48.501575  213962 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:46:48.511306  213962 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:46:48.511726  213962 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:46:48.511820  213962 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:46:48.711885  213962 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:46:48.712007  213962 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:50:48.709787  213962 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00113931s
	I1210 06:50:48.709827  213962 kubeadm.go:319] 
	I1210 06:50:48.709894  213962 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:50:48.709931  213962 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:50:48.710036  213962 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:50:48.710050  213962 kubeadm.go:319] 
	I1210 06:50:48.710186  213962 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:50:48.710242  213962 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:50:48.710292  213962 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:50:48.710302  213962 kubeadm.go:319] 
	I1210 06:50:48.714180  213962 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:50:48.714624  213962 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:50:48.714738  213962 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:50:48.714978  213962 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:50:48.714985  213962 kubeadm.go:319] 
	I1210 06:50:48.715093  213962 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
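	For anyone reproducing this failure: the health probe kubeadm gives up on above can be run by hand against the node. A diagnostic sketch (the profile name is taken from this run; availability of curl inside the kicbase container is an assumption):
	
		out/minikube-linux-arm64 -p kubernetes-upgrade-712093 ssh -- sudo systemctl status kubelet --no-pager
		out/minikube-linux-arm64 -p kubernetes-upgrade-712093 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
		out/minikube-linux-arm64 -p kubernetes-upgrade-712093 ssh -- curl -sS http://127.0.0.1:10248/healthz
	
	A healthy kubelet answers the last probe with "ok"; here it never comes up, which is why the 4m0s wait above expires.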
	I1210 06:50:48.715151  213962 kubeadm.go:403] duration metric: took 12m9.305275057s to StartCluster
	I1210 06:50:48.715187  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:48.715247  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:48.740016  213962 cri.go:89] found id: ""
	I1210 06:50:48.740041  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.740050  213962 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:50:48.740058  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:50:48.740121  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:48.764372  213962 cri.go:89] found id: ""
	I1210 06:50:48.764396  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.764405  213962 logs.go:284] No container was found matching "etcd"
	I1210 06:50:48.764416  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:50:48.764473  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:48.789026  213962 cri.go:89] found id: ""
	I1210 06:50:48.789050  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.789059  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:50:48.789065  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:48.789122  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:48.818825  213962 cri.go:89] found id: ""
	I1210 06:50:48.818856  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.818872  213962 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:50:48.818882  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:48.818965  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:48.849712  213962 cri.go:89] found id: ""
	I1210 06:50:48.849790  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.849813  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:48.849833  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:48.849922  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:48.878656  213962 cri.go:89] found id: ""
	I1210 06:50:48.878680  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.878689  213962 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:50:48.878695  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:48.878753  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:48.904300  213962 cri.go:89] found id: ""
	I1210 06:50:48.904378  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.904402  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:48.904415  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:48.904479  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:48.943812  213962 cri.go:89] found id: ""
	I1210 06:50:48.943838  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.943846  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:48.943855  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:50:48.943867  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:49.015789  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:49.015816  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:49.074369  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:49.074403  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:49.087758  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:49.087786  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:49.154912  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:49.154931  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:50:49.154943  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 06:50:49.194492  213962 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00113931s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:50:49.194552  213962 out.go:285] * 
	W1210 06:50:49.194609  213962 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00113931s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:49.194624  213962 out.go:285] * 
	W1210 06:50:49.196734  213962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:50:49.202541  213962 out.go:203] 
	W1210 06:50:49.206178  213962 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00113931s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:49.206236  213962 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:50:49.206258  213962 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:50:49.210058  213962 out.go:203] 

** /stderr **
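The suggestion the run ends with is to pin the kubelet cgroup driver explicitly. A sketch of that retry, reusing the flags recorded for the failing invocation below and adding only the suggested --extra-config:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd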
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-712093 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-712093 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-712093 version --output=json: exit status 1 (72.966057ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
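The clientVersion block above shows kubectl itself is fine (client v1.33.2); only the server half of the version call failed, because nothing is listening behind 192.168.76.2:8443. A quick reachability probe for that endpoint (a sketch; -k skips certificate verification since only liveness matters here):

	curl -k --connect-timeout 5 https://192.168.76.2:8443/healthz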
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-10 06:50:49.887539626 +0000 UTC m=+4914.029456237
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-712093
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-712093:

-- stdout --
	[
	    {
	        "Id": "3eb75bb3e93e1f5d6c9090268e97121b4dc0f191ab5c685634f00641e4a9d9cd",
	        "Created": "2025-12-10T06:37:46.913847366Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:38:17.866285858Z",
	            "FinishedAt": "2025-12-10T06:38:16.534952951Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/3eb75bb3e93e1f5d6c9090268e97121b4dc0f191ab5c685634f00641e4a9d9cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3eb75bb3e93e1f5d6c9090268e97121b4dc0f191ab5c685634f00641e4a9d9cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3eb75bb3e93e1f5d6c9090268e97121b4dc0f191ab5c685634f00641e4a9d9cd/hosts",
	        "LogPath": "/var/lib/docker/containers/3eb75bb3e93e1f5d6c9090268e97121b4dc0f191ab5c685634f00641e4a9d9cd/3eb75bb3e93e1f5d6c9090268e97121b4dc0f191ab5c685634f00641e4a9d9cd-json.log",
	        "Name": "/kubernetes-upgrade-712093",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-712093:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-712093",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3eb75bb3e93e1f5d6c9090268e97121b4dc0f191ab5c685634f00641e4a9d9cd",
	                "LowerDir": "/var/lib/docker/overlay2/bb934718fc515e4d9b640a9481ef485bd2cb96660135b1f45c7b64c76931da2d-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb934718fc515e4d9b640a9481ef485bd2cb96660135b1f45c7b64c76931da2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb934718fc515e4d9b640a9481ef485bd2cb96660135b1f45c7b64c76931da2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb934718fc515e4d9b640a9481ef485bd2cb96660135b1f45c7b64c76931da2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-712093",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-712093/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-712093",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-712093",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-712093",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cda7d7ef28e635f1b3854a2ad73b5edb4457a91d27c3be65692a589e0a15f3d1",
	            "SandboxKey": "/var/run/docker/netns/cda7d7ef28e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-712093": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:77:2a:79:23:f7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e7447d71e340ee32be43d7e4258b76f779752259ca5665c8419ac35a78135f2f",
	                    "EndpointID": "5cb576b8967ad68698bfa55fb9a571dcec66aa1933d31a950659632dce55285e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-712093",
	                        "3eb75bb3e93e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
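Most of the inspect payload above is default HostConfig noise; the fields this post-mortem actually leans on (container state, the published apiserver port, the node IP) can be extracted with Go templates. A sketch against the same container:

	docker inspect -f '{{.State.Status}}' kubernetes-upgrade-712093
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-712093
	docker inspect -f '{{(index .NetworkSettings.Networks "kubernetes-upgrade-712093").IPAddress}}' kubernetes-upgrade-712093

For this run those resolve to "running", "33016", and "192.168.76.2", matching the JSON above.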
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-712093 -n kubernetes-upgrade-712093
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-712093 -n kubernetes-upgrade-712093: exit status 2 (311.31884ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-712093 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-225109 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-225109            │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │                     │
	│ ssh     │ -p cilium-225109 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-225109            │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │                     │
	│ ssh     │ -p cilium-225109 sudo crio config                                                                                                                                                                                                                   │ cilium-225109            │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │                     │
	│ delete  │ -p cilium-225109                                                                                                                                                                                                                                    │ cilium-225109            │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:42 UTC │
	│ start   │ -p force-systemd-env-099835 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-099835 │ jenkins │ v1.37.0 │ 10 Dec 25 06:42 UTC │ 10 Dec 25 06:43 UTC │
	│ ssh     │ force-systemd-env-099835 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-099835 │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │ 10 Dec 25 06:43 UTC │
	│ delete  │ -p force-systemd-env-099835                                                                                                                                                                                                                         │ force-systemd-env-099835 │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │ 10 Dec 25 06:43 UTC │
	│ start   │ -p cert-expiration-734005 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-734005   │ jenkins │ v1.37.0 │ 10 Dec 25 06:43 UTC │ 10 Dec 25 06:43 UTC │
	│ start   │ -p cert-expiration-734005 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-734005   │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ delete  │ -p cert-expiration-734005                                                                                                                                                                                                                           │ cert-expiration-734005   │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:46 UTC │
	│ start   │ -p cert-options-646610 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-646610      │ jenkins │ v1.37.0 │ 10 Dec 25 06:46 UTC │ 10 Dec 25 06:47 UTC │
	│ ssh     │ cert-options-646610 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-646610      │ jenkins │ v1.37.0 │ 10 Dec 25 06:47 UTC │ 10 Dec 25 06:47 UTC │
	│ ssh     │ -p cert-options-646610 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-646610      │ jenkins │ v1.37.0 │ 10 Dec 25 06:47 UTC │ 10 Dec 25 06:47 UTC │
	│ delete  │ -p cert-options-646610                                                                                                                                                                                                                              │ cert-options-646610      │ jenkins │ v1.37.0 │ 10 Dec 25 06:47 UTC │ 10 Dec 25 06:47 UTC │
	│ start   │ -p old-k8s-version-806899 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:47 UTC │ 10 Dec 25 06:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-806899 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:48 UTC │
	│ stop    │ -p old-k8s-version-806899 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:48 UTC │ 10 Dec 25 06:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-806899 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ start   │ -p old-k8s-version-806899 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:49 UTC │ 10 Dec 25 06:49 UTC │
	│ image   │ old-k8s-version-806899 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ pause   │ -p old-k8s-version-806899 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ unpause │ -p old-k8s-version-806899 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ delete  │ -p old-k8s-version-806899                                                                                                                                                                                                                           │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ delete  │ -p old-k8s-version-806899                                                                                                                                                                                                                           │ old-k8s-version-806899   │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-320236        │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:50:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
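
	(The header above documents the standard klog line layout. As a minimal stand-alone sketch — the regexp and field names are illustrative, not minikube code — one such line can be split into its parts in Go:)

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    // glogLine matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg.
	    var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ \]]+)\] (.*)$`)

	    func main() {
	    	m := glogLine.FindStringSubmatch("I1210 06:50:10.357147  266079 out.go:360] Setting OutFile to fd 1 ...")
	    	if m != nil {
	    		fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
	    			m[1], m[2], m[3], m[4], m[5], m[6])
	    	}
	    }
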
	I1210 06:50:10.357147  266079 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:50:10.357356  266079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:50:10.357370  266079 out.go:374] Setting ErrFile to fd 2...
	I1210 06:50:10.357380  266079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:50:10.357776  266079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:50:10.358400  266079 out.go:368] Setting JSON to false
	I1210 06:50:10.359701  266079 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5561,"bootTime":1765343850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:50:10.359798  266079 start.go:143] virtualization:  
	I1210 06:50:10.364294  266079 out.go:179] * [no-preload-320236] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:50:10.367764  266079 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:50:10.367865  266079 notify.go:221] Checking for updates...
	I1210 06:50:10.374238  266079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:50:10.377475  266079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:50:10.380636  266079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:50:10.383721  266079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:50:10.386768  266079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:50:10.390361  266079 config.go:182] Loaded profile config "kubernetes-upgrade-712093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:50:10.390506  266079 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:50:10.422431  266079 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:50:10.422564  266079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:50:10.506194  266079 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:50:10.497144758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:50:10.506309  266079 docker.go:319] overlay module found
	I1210 06:50:10.509706  266079 out.go:179] * Using the docker driver based on user configuration
	I1210 06:50:10.512587  266079 start.go:309] selected driver: docker
	I1210 06:50:10.512607  266079 start.go:927] validating driver "docker" against <nil>
	I1210 06:50:10.512620  266079 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:50:10.513349  266079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:50:10.566371  266079 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:50:10.557369535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:50:10.566528  266079 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:50:10.566766  266079 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:50:10.569717  266079 out.go:179] * Using Docker driver with root privileges
	I1210 06:50:10.572601  266079 cni.go:84] Creating CNI manager for ""
	I1210 06:50:10.572677  266079 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:50:10.572690  266079 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:50:10.572783  266079 start.go:353] cluster config:
	{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:50:10.576099  266079 out.go:179] * Starting "no-preload-320236" primary control-plane node in "no-preload-320236" cluster
	I1210 06:50:10.579092  266079 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:50:10.582084  266079 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:50:10.585063  266079 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:50:10.585141  266079 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:50:10.585200  266079 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 06:50:10.585229  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json: {Name:mk4bf1092818b21dd1d254a18e84a5343bc61afd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:10.585498  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
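
	(The binary.go line above pins the kubeadm download to the digest published at the companion .sha256 URL. The comparison step, as an illustrative stand-alone Go sketch — this is my own code, not minikube's downloader:)

	    package main

	    import (
	    	"crypto/sha256"
	    	"encoding/hex"
	    	"fmt"
	    	"io"
	    	"os"
	    	"strings"
	    )

	    // fileSHA256 hashes a downloaded binary so it can be compared against
	    // the hex digest fetched from the .sha256 URL.
	    func fileSHA256(path string) (string, error) {
	    	f, err := os.Open(path)
	    	if err != nil {
	    		return "", err
	    	}
	    	defer f.Close()
	    	h := sha256.New()
	    	if _, err := io.Copy(h, f); err != nil {
	    		return "", err
	    	}
	    	return hex.EncodeToString(h.Sum(nil)), nil
	    }

	    func main() {
	    	got, err := fileSHA256("kubeadm") // hypothetical local download
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	want := "..." // contents of kubeadm.sha256; real files may also carry a trailing filename
	    	fmt.Println("ok:", strings.EqualFold(got, strings.TrimSpace(want)))
	    }
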
	I1210 06:50:10.619049  266079 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:50:10.619077  266079 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:50:10.619102  266079 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:50:10.619132  266079 start.go:360] acquireMachinesLock for no-preload-320236: {Name:mk4a67a43519a7e8fad4432e15b5aa1fee295390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:10.619254  266079 start.go:364] duration metric: took 105.872µs to acquireMachinesLock for "no-preload-320236"
	I1210 06:50:10.619279  266079 start.go:93] Provisioning new machine with config: &{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:50:10.619371  266079 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:50:10.622717  266079 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:50:10.622957  266079 start.go:159] libmachine.API.Create for "no-preload-320236" (driver="docker")
	I1210 06:50:10.622978  266079 client.go:173] LocalClient.Create starting
	I1210 06:50:10.623149  266079 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 06:50:10.623213  266079 main.go:143] libmachine: Decoding PEM data...
	I1210 06:50:10.623236  266079 main.go:143] libmachine: Parsing certificate...
	I1210 06:50:10.623294  266079 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 06:50:10.623311  266079 main.go:143] libmachine: Decoding PEM data...
	I1210 06:50:10.623322  266079 main.go:143] libmachine: Parsing certificate...
	I1210 06:50:10.623746  266079 cli_runner.go:164] Run: docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:50:10.651463  266079 cli_runner.go:211] docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:50:10.651550  266079 network_create.go:284] running [docker network inspect no-preload-320236] to gather additional debugging logs...
	I1210 06:50:10.651574  266079 cli_runner.go:164] Run: docker network inspect no-preload-320236
	W1210 06:50:10.677622  266079 cli_runner.go:211] docker network inspect no-preload-320236 returned with exit code 1
	I1210 06:50:10.677654  266079 network_create.go:287] error running [docker network inspect no-preload-320236]: docker network inspect no-preload-320236: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-320236 not found
	I1210 06:50:10.677669  266079 network_create.go:289] output of [docker network inspect no-preload-320236]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-320236 not found
	
	** /stderr **
	I1210 06:50:10.677769  266079 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:50:10.695532  266079 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 06:50:10.695813  266079 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 06:50:10.696103  266079 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 06:50:10.696399  266079 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e7447d71e340 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:a2:98:b0:0d:26} reservation:<nil>}
	I1210 06:50:10.696761  266079 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a498f0}
	I1210 06:50:10.696786  266079 network_create.go:124] attempt to create docker network no-preload-320236 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:50:10.696847  266079 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-320236 no-preload-320236
	I1210 06:50:10.757486  266079 network_create.go:108] docker network no-preload-320236 192.168.85.0/24 created
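
	(network.go above walks private /24 candidates until one is free; the taken subnets 192.168.49.0/24 through 192.168.76.0/24 step the third octet by 9, which is why 192.168.85.0/24 is chosen. A stand-alone sketch of that scan, with the taken set hard-coded from the log:)

	    package main

	    import "fmt"

	    func main() {
	    	// Subnets already in use on this host, as reported by the log above.
	    	taken := map[string]bool{
	    		"192.168.49.0/24": true, "192.168.58.0/24": true,
	    		"192.168.67.0/24": true, "192.168.76.0/24": true,
	    	}
	    	// Candidates step the third octet by 9, matching the subnets tried above.
	    	for octet := 49; octet <= 255; octet += 9 {
	    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
	    		if !taken[subnet] {
	    			fmt.Println("using free private subnet", subnet) // prints 192.168.85.0/24
	    			break
	    		}
	    	}
	    }
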
	I1210 06:50:10.757517  266079 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-320236" container
	I1210 06:50:10.757591  266079 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:50:10.767650  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:10.781263  266079 cli_runner.go:164] Run: docker volume create no-preload-320236 --label name.minikube.sigs.k8s.io=no-preload-320236 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:50:10.815434  266079 oci.go:103] Successfully created a docker volume no-preload-320236
	I1210 06:50:10.815515  266079 cli_runner.go:164] Run: docker run --rm --name no-preload-320236-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-320236 --entrypoint /usr/bin/test -v no-preload-320236:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:50:10.955596  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:11.147814  266079 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.147921  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:50:11.147930  266079 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.018µs
	I1210 06:50:11.147938  266079 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:50:11.147949  266079 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.147980  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:50:11.147985  266079 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 37.851µs
	I1210 06:50:11.147993  266079 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148003  266079 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148035  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:50:11.148040  266079 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 38.45µs
	I1210 06:50:11.148046  266079 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148057  266079 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148094  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:50:11.148099  266079 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 42.962µs
	I1210 06:50:11.148104  266079 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148115  266079 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148141  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:50:11.148145  266079 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 31.655µs
	I1210 06:50:11.148150  266079 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148159  266079 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148184  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:50:11.148189  266079 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.238µs
	I1210 06:50:11.148194  266079 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:50:11.148204  266079 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148230  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:50:11.148234  266079 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 30.885µs
	I1210 06:50:11.148239  266079 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:50:11.148249  266079 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148274  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:50:11.148278  266079 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.031µs
	I1210 06:50:11.148284  266079 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:50:11.148294  266079 cache.go:87] Successfully saved all images to host disk.
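
	(The cache.go lines above show how each image reference maps to a per-architecture tarball under .minikube/cache/images, with the tag's ':' flattened to '_'. A sketch of that mapping; the root path in main is illustrative, not this run's actual MINIKUBE_HOME:)

	    package main

	    import (
	    	"fmt"
	    	"path/filepath"
	    	"strings"
	    )

	    // cachePath mirrors the layout visible above, e.g.
	    // registry.k8s.io/pause:3.10.1 -> cache/images/arm64/registry.k8s.io/pause_3.10.1
	    func cachePath(root, arch, ref string) string {
	    	return filepath.Join(root, "cache", "images", arch, strings.ReplaceAll(ref, ":", "_"))
	    }

	    func main() {
	    	fmt.Println(cachePath("/home/jenkins/.minikube", "arm64", "registry.k8s.io/etcd:3.6.6-0"))
	    }
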
	I1210 06:50:11.453488  266079 oci.go:107] Successfully prepared a docker volume no-preload-320236
	I1210 06:50:11.453561  266079 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 06:50:11.453701  266079 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:50:11.453818  266079 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:50:11.514045  266079 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-320236 --name no-preload-320236 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-320236 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-320236 --network no-preload-320236 --ip 192.168.85.2 --volume no-preload-320236:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:50:11.806086  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Running}}
	I1210 06:50:11.828683  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 06:50:11.856487  266079 cli_runner.go:164] Run: docker exec no-preload-320236 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:50:11.910218  266079 oci.go:144] the created container "no-preload-320236" has a running status.
	I1210 06:50:11.910249  266079 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa...
	I1210 06:50:12.228299  266079 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:50:12.257484  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 06:50:12.280168  266079 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:50:12.280194  266079 kic_runner.go:114] Args: [docker exec --privileged no-preload-320236 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:50:12.338357  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 06:50:12.370328  266079 machine.go:94] provisionDockerMachine start ...
	I1210 06:50:12.370429  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:12.390073  266079 main.go:143] libmachine: Using SSH client type: native
	I1210 06:50:12.390434  266079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:50:12.390449  266079 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:50:12.391179  266079 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:50:15.542913  266079 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 06:50:15.542939  266079 ubuntu.go:182] provisioning hostname "no-preload-320236"
	I1210 06:50:15.542999  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:15.561581  266079 main.go:143] libmachine: Using SSH client type: native
	I1210 06:50:15.561920  266079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:50:15.561935  266079 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-320236 && echo "no-preload-320236" | sudo tee /etc/hostname
	I1210 06:50:15.722059  266079 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 06:50:15.722142  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:15.739538  266079 main.go:143] libmachine: Using SSH client type: native
	I1210 06:50:15.739873  266079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:50:15.739898  266079 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:50:15.895518  266079 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:50:15.895544  266079 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:50:15.895573  266079 ubuntu.go:190] setting up certificates
	I1210 06:50:15.895590  266079 provision.go:84] configureAuth start
	I1210 06:50:15.895654  266079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 06:50:15.912513  266079 provision.go:143] copyHostCerts
	I1210 06:50:15.912580  266079 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:50:15.912592  266079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:50:15.912674  266079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:50:15.912774  266079 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:50:15.912784  266079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:50:15.912813  266079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:50:15.912868  266079 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:50:15.912877  266079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:50:15.912906  266079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:50:15.912983  266079 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.no-preload-320236 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-320236]
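
	(provision.go above issues a server certificate whose SANs cover every name the node answers to: 127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-320236. A self-contained sketch of issuing such a cert with Go's crypto/x509 — self-signed here for brevity, whereas the real flow signs with the ca-key.pem CA:)

	    package main

	    import (
	    	"crypto/ecdsa"
	    	"crypto/elliptic"
	    	"crypto/rand"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-320236"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	    		DNSNames:     []string{"localhost", "minikube", "no-preload-320236"},
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
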
	I1210 06:50:15.998869  266079 provision.go:177] copyRemoteCerts
	I1210 06:50:15.998946  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:50:15.998986  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.018837  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.122475  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:50:16.139611  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:50:16.157281  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:50:16.174807  266079 provision.go:87] duration metric: took 279.189755ms to configureAuth
	I1210 06:50:16.174838  266079 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:50:16.175038  266079 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:50:16.175051  266079 machine.go:97] duration metric: took 3.80470314s to provisionDockerMachine
	I1210 06:50:16.175057  266079 client.go:176] duration metric: took 5.552073605s to LocalClient.Create
	I1210 06:50:16.175068  266079 start.go:167] duration metric: took 5.552117084s to libmachine.API.Create "no-preload-320236"
	I1210 06:50:16.175075  266079 start.go:293] postStartSetup for "no-preload-320236" (driver="docker")
	I1210 06:50:16.175085  266079 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:50:16.175137  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:50:16.175194  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.192492  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.294697  266079 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:50:16.297781  266079 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:50:16.297810  266079 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:50:16.297821  266079 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:50:16.297875  266079 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:50:16.297964  266079 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:50:16.298074  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:50:16.305000  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:50:16.321463  266079 start.go:296] duration metric: took 146.373615ms for postStartSetup
	I1210 06:50:16.321906  266079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 06:50:16.339373  266079 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 06:50:16.339638  266079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:50:16.339696  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.358830  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.468340  266079 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:50:16.474028  266079 start.go:128] duration metric: took 5.854640675s to createHost
	I1210 06:50:16.474051  266079 start.go:83] releasing machines lock for "no-preload-320236", held for 5.854788459s
	I1210 06:50:16.474122  266079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 06:50:16.491075  266079 ssh_runner.go:195] Run: cat /version.json
	I1210 06:50:16.491131  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.491370  266079 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:50:16.491435  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.511825  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.526995  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.618552  266079 ssh_runner.go:195] Run: systemctl --version
	I1210 06:50:16.708806  266079 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:50:16.713034  266079 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:50:16.713109  266079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:50:16.741364  266079 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:50:16.741384  266079 start.go:496] detecting cgroup driver to use...
	I1210 06:50:16.741416  266079 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:50:16.741465  266079 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:50:16.756680  266079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:50:16.769479  266079 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:50:16.769538  266079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:50:16.786919  266079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:50:16.805085  266079 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:50:16.924714  266079 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:50:17.049962  266079 docker.go:234] disabling docker service ...
	I1210 06:50:17.050032  266079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:50:17.072329  266079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:50:17.086838  266079 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:50:17.204496  266079 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:50:17.320874  266079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:50:17.333477  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:50:17.347729  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:17.565340  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:50:17.575463  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:50:17.584255  266079 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:50:17.584317  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:50:17.593112  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:50:17.601825  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:50:17.610504  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:50:17.619207  266079 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:50:17.627352  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:50:17.636159  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:50:17.644795  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:50:17.653437  266079 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:50:17.660860  266079 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:50:17.667912  266079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:50:17.772294  266079 ssh_runner.go:195] Run: sudo systemctl restart containerd
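
	(The sed series above rewires /etc/containerd/config.toml before the restart: the sandbox image is pinned to pause:3.10.1, SystemdCgroup is forced to false because the host uses the cgroupfs driver, the runc v1 runtimes are mapped to io.containerd.runc.v2, and the CNI conf_dir and unprivileged-port settings are normalized. The two central rewrites, expressed as a stand-alone Go sketch over an illustrative TOML fragment:)

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    func main() {
	    	// Illustrative config body; the real file is far larger.
	    	conf := `[plugins."io.containerd.grpc.v1.cri"]
	      sandbox_image = "registry.k8s.io/pause:3.9"
	      SystemdCgroup = true`
	    	conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
	    		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	    	conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
	    		ReplaceAllString(conf, `${1}SystemdCgroup = false`) // host cgroup driver is cgroupfs
	    	fmt.Println(conf)
	    }
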
	I1210 06:50:17.868517  266079 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:50:17.868615  266079 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:50:17.872832  266079 start.go:564] Will wait 60s for crictl version
	I1210 06:50:17.872895  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:17.876778  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:50:17.911625  266079 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:50:17.911714  266079 ssh_runner.go:195] Run: containerd --version
	I1210 06:50:17.934503  266079 ssh_runner.go:195] Run: containerd --version
	I1210 06:50:17.968781  266079 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:50:17.971703  266079 cli_runner.go:164] Run: docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:50:17.987544  266079 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:50:17.991316  266079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
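
	(The bash one-liner above makes the host.minikube.internal mapping idempotent: any stale entry is filtered out before the gateway IP, 192.168.85.1 here, is appended. The same logic as a Go sketch:)

	    package main

	    import (
	    	"fmt"
	    	"strings"
	    )

	    // withHostEntry mirrors the one-liner above: drop any existing
	    // host.minikube.internal line, then append the gateway mapping.
	    func withHostEntry(hosts, ip string) string {
	    	var kept []string
	    	for _, line := range strings.Split(hosts, "\n") {
	    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
	    			kept = append(kept, line)
	    		}
	    	}
	    	return strings.Join(append(kept, ip+"\thost.minikube.internal"), "\n")
	    }

	    func main() {
	    	fmt.Println(withHostEntry("127.0.0.1\tlocalhost", "192.168.85.1"))
	    }
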
	I1210 06:50:18.002729  266079 kubeadm.go:884] updating cluster {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:50:18.002929  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:18.171837  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:18.331997  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:18.481039  266079 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:50:18.481126  266079 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:50:18.505475  266079 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:50:18.505499  266079 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:50:18.505562  266079 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:18.505769  266079 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.505855  266079 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.505933  266079 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.506040  266079 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.506130  266079 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:50:18.506211  266079 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.506304  266079 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.507390  266079 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.507864  266079 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.508159  266079 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.508238  266079 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:50:18.508332  266079 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.508378  266079 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.508422  266079 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:18.508941  266079 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.821197  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:50:18.821276  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.837491  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:50:18.837624  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.845023  266079 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:50:18.845115  266079 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.845217  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.848059  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:50:18.848126  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.861154  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:50:18.861287  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.864373  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:50:18.864440  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:50:18.866966  266079 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:50:18.867081  266079 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.867166  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.867296  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.885523  266079 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:50:18.885566  266079 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.885615  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.913356  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:50:18.913424  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.916853  266079 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:50:18.916944  266079 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.917032  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.917155  266079 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:50:18.917193  266079 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:50:18.917245  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.921439  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:50:18.921506  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.924302  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.924471  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.924604  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.953250  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:50:18.953395  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.953507  266079 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:50:18.953571  266079 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.953649  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.978208  266079 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:50:18.978254  266079 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.978312  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:19.035858  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:19.035939  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:19.036008  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:19.052663  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:19.052759  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:50:19.052829  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:19.052894  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:19.140965  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:19.141038  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:50:19.141116  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:50:19.141173  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:19.171865  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:19.171964  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:19.172032  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:50:19.172088  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:19.201381  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.201593  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.201731  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:50:19.201792  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:50:19.201967  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:50:19.202097  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:50:19.290804  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:19.290966  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:19.291120  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:19.291296  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:19.291413  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:50:19.291510  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:50:19.291629  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:50:19.291682  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:50:19.291770  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.291925  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1210 06:50:19.371780  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:50:19.371891  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:50:19.371954  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:19.372001  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:19.372292  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.372315  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:50:19.372362  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:50:19.372374  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:50:19.471162  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.471271  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:50:19.477508  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.477603  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:50:19.515802  266079 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:50:19.516089  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1210 06:50:19.695827  266079 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:50:19.695996  266079 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:50:19.696072  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:19.836741  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:50:19.856904  266079 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:50:19.856952  266079 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:19.857002  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:19.874139  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.874219  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.937180  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:21.190446  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.316198574s)
	I1210 06:50:21.190477  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:50:21.190494  266079 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:50:21.190551  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:50:21.190629  266079 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.253428979s)
	I1210 06:50:21.190674  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:22.218966  266079 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.028260036s)
	I1210 06:50:22.219089  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:22.219176  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.028608495s)
	I1210 06:50:22.219192  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:50:22.219215  266079 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:50:22.219252  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:50:23.544801  266079 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.325669259s)
	I1210 06:50:23.544891  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:50:23.545002  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:50:23.545071  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.32580549s)
	I1210 06:50:23.545141  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:50:23.545178  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:23.545239  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:24.466489  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:50:24.466527  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:50:24.466728  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:50:24.466792  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:24.466875  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:25.611589  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.144674621s)
	I1210 06:50:25.611666  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:50:25.611694  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:50:25.611767  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:50:26.609696  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:50:26.609725  266079 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:50:26.609775  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:50:26.994232  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:50:26.994271  266079 cache_images.go:125] Successfully loaded all cached images
	I1210 06:50:26.994277  266079 cache_images.go:94] duration metric: took 8.488762015s to LoadCachedImages
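Note: every image above went through the same four-step cycle: list it with ctr in the k8s.io namespace, remove any stale tag with crictl, scp the cached tarball into /var/lib/minikube/images/, then import it. A condensed, hand-runnable sketch of one iteration (names and paths from this log; run inside the node, not a verbatim copy of minikube's Go code):

    IMG=registry.k8s.io/pause:3.10.1            # illustrative choice of image
    TAR=/var/lib/minikube/images/pause_3.10.1
    sudo ctr -n=k8s.io images ls "name==$IMG"   # 1. already present?
    sudo crictl rmi "$IMG" || true              # 2. drop any stale tag
    stat -c "%s %y" "$TAR"                      # 3. tarball placed by scp
    sudo ctr -n=k8s.io images import "$TAR"     # 4. load into containerd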
	I1210 06:50:26.994297  266079 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:50:26.994404  266079 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:50:26.994503  266079 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:50:27.021471  266079 cni.go:84] Creating CNI manager for ""
	I1210 06:50:27.021497  266079 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:50:27.021517  266079 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:50:27.021539  266079 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320236 NodeName:no-preload-320236 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:50:27.021669  266079 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-320236"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
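Note: the block above is the multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new and copied into place before init (see the cp at 06:50:29 below). To sanity-check such a file by hand, recent kubeadm releases (v1.26 and later) ship a validator; flag spelling assumed from that version range:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml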
	I1210 06:50:27.021747  266079 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:50:27.029836  266079 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:50:27.029902  266079 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:50:27.037943  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:27.037959  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:50:27.038034  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:50:27.037942  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:50:27.038148  266079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:50:27.038035  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:50:27.051706  266079 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:50:27.051753  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:50:27.051883  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:50:27.051947  266079 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:50:27.051970  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:50:27.059739  266079 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:50:27.059789  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
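Note: the binaries block repeats minikube's stat-then-transfer idiom: a `stat -c "%s %y"` probe that exits 1 marks the file missing and triggers the scp. The same idiom, spelled out for one binary (path from this log; the echo stands in for the actual transfer):

    BIN=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
    if ! stat -c "%s %y" "$BIN" >/dev/null 2>&1; then
      echo "missing: would scp the cached kubelet to $BIN"
    fi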
	I1210 06:50:27.929816  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:50:27.939061  266079 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 06:50:27.952801  266079 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:50:27.966670  266079 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 06:50:27.981130  266079 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:50:27.984820  266079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:50:27.994812  266079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:50:28.107290  266079 ssh_runner.go:195] Run: sudo systemctl start kubelet
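Note: just before starting the kubelet, the one-liner at 06:50:27.984820 pins control-plane.minikube.internal in /etc/hosts: strip any old entry, append a fresh one, copy the temp file back ($$ is the shell's PID, giving a unique temp name). The same commands, spread out with comments:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.85.2\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$                 # rebuild with exactly one entry
    sudo cp /tmp/h.$$ /etc/hosts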
	I1210 06:50:28.124654  266079 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236 for IP: 192.168.85.2
	I1210 06:50:28.124683  266079 certs.go:195] generating shared ca certs ...
	I1210 06:50:28.124715  266079 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.124889  266079 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:50:28.124965  266079 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:50:28.124999  266079 certs.go:257] generating profile certs ...
	I1210 06:50:28.125078  266079 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key
	I1210 06:50:28.125098  266079 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt with IP's: []
	I1210 06:50:28.438914  266079 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt ...
	I1210 06:50:28.438949  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: {Name:mk87e6d0d00fdfa55c157efee4f653a866c16600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.439194  266079 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key ...
	I1210 06:50:28.439211  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key: {Name:mk8fa1af6ba001f3c44ba9cb3c76d7ccfa3a8913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.439346  266079 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447
	I1210 06:50:28.439367  266079 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 06:50:28.688325  266079 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447 ...
	I1210 06:50:28.688356  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447: {Name:mkc7ebf7e1f25249f97724e0934b50d7cab2a773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.688534  266079 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447 ...
	I1210 06:50:28.688548  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447: {Name:mkaeb8e263db616770ec5454284d7888c2f59143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.688657  266079 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447 -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt
	I1210 06:50:28.688747  266079 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447 -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key
	I1210 06:50:28.688806  266079 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key
	I1210 06:50:28.688826  266079 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt with IP's: []
	I1210 06:50:28.923992  266079 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt ...
	I1210 06:50:28.924071  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt: {Name:mka1b39a38a0d4ece5ba7ee846992485136c8d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.924279  266079 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key ...
	I1210 06:50:28.924339  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key: {Name:mk78055f88bce004615455c1f6210d3942403534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.924560  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:50:28.924644  266079 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:50:28.924671  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:50:28.924732  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:50:28.924786  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:50:28.924837  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:50:28.924918  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:50:28.925545  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:50:28.942768  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:50:28.959961  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:50:28.976970  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:50:28.993973  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:50:29.013851  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:50:29.031564  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:50:29.049267  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:50:29.067220  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:50:29.086377  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:50:29.103910  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:50:29.121296  266079 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
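Note: per the "generating signed profile cert" lines, the apiserver cert was issued for the service VIP (10.96.0.1), 127.0.0.1, 10.0.0.1 and the node IP before being copied to /var/lib/minikube/certs. A quick way to confirm the SANs on the installed copy (standard openssl; path from this log):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 "Subject Alternative Name"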
	I1210 06:50:29.134219  266079 ssh_runner.go:195] Run: openssl version
	I1210 06:50:29.140393  266079 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.148154  266079 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:50:29.155617  266079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.159327  266079 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.159416  266079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.205455  266079 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:50:29.213491  266079 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:50:29.221070  266079 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.228439  266079 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:50:29.236182  266079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.240205  266079 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.240276  266079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.283604  266079 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:50:29.291346  266079 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:50:29.298958  266079 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.306694  266079 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:50:29.314188  266079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.317729  266079 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.317790  266079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.358402  266079 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:50:29.365944  266079 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
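Note: the `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup: trust stores resolve a CA through a symlink named <subject-hash>.0, which is why minikubeCA.pem ends up behind b5213941.0. Reproducing one link by hand:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"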
	I1210 06:50:29.373196  266079 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:50:29.376794  266079 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:50:29.376848  266079 kubeadm.go:401] StartCluster: {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:50:29.376921  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:50:29.376978  266079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:50:29.405917  266079 cri.go:89] found id: ""
	I1210 06:50:29.405988  266079 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:50:29.418877  266079 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:50:29.427150  266079 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:50:29.427261  266079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:50:29.437933  266079 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:50:29.438007  266079 kubeadm.go:158] found existing configuration files:
	
	I1210 06:50:29.438110  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:50:29.446767  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:50:29.446874  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:50:29.454604  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:50:29.463119  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:50:29.463230  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:50:29.470664  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:50:29.483114  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:50:29.483228  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:50:29.490744  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:50:29.498555  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:50:29.498622  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
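Note: the stale-config pass greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the grep fails; here the files simply do not exist yet, so every grep exits 2 and every rm is a no-op. One iteration, spelled out (endpoint and path from this log):

    F=/etc/kubernetes/admin.conf
    sudo grep -q https://control-plane.minikube.internal:8443 "$F" \
      || sudo rm -f "$F"    # stale or absent: clear it before kubeadm init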
	I1210 06:50:29.505904  266079 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:50:29.545803  266079 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:50:29.545951  266079 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:50:29.613904  266079 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:50:29.614011  266079 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:50:29.614070  266079 kubeadm.go:319] OS: Linux
	I1210 06:50:29.614138  266079 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:50:29.614207  266079 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:50:29.614272  266079 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:50:29.614355  266079 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:50:29.614431  266079 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:50:29.614506  266079 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:50:29.614600  266079 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:50:29.614672  266079 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:50:29.614744  266079 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:50:29.689849  266079 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:50:29.690004  266079 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:50:29.690118  266079 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:50:29.699380  266079 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:50:29.709446  266079 out.go:252]   - Generating certificates and keys ...
	I1210 06:50:29.709579  266079 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:50:29.709684  266079 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:50:29.815311  266079 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:50:30.084534  266079 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:50:30.243769  266079 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:50:30.493801  266079 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:50:30.608119  266079 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:50:30.608485  266079 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-320236] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:50:30.756158  266079 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:50:30.756656  266079 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-320236] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:50:31.029200  266079 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:50:31.107257  266079 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:50:31.381355  266079 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:50:31.381715  266079 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:50:31.769307  266079 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:50:32.114938  266079 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:50:32.369786  266079 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:50:32.896406  266079 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:50:32.966609  266079 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:50:32.967762  266079 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:50:32.972159  266079 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:50:32.977107  266079 out.go:252]   - Booting up control plane ...
	I1210 06:50:32.977229  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:50:32.977314  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:50:32.977381  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:50:32.993235  266079 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:50:32.993346  266079 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:50:33.002953  266079 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:50:33.003387  266079 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:50:33.003438  266079 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:50:33.142914  266079 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:50:33.143052  266079 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:50:48.709787  213962 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00113931s
	I1210 06:50:48.709827  213962 kubeadm.go:319] 
	I1210 06:50:48.709894  213962 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:50:48.709931  213962 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:50:48.710036  213962 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:50:48.710050  213962 kubeadm.go:319] 
	I1210 06:50:48.710186  213962 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:50:48.710242  213962 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:50:48.710292  213962 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:50:48.710302  213962 kubeadm.go:319] 
	I1210 06:50:48.714180  213962 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:50:48.714624  213962 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:50:48.714738  213962 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:50:48.714978  213962 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:50:48.714985  213962 kubeadm.go:319] 
	I1210 06:50:48.715093  213962 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:50:48.715151  213962 kubeadm.go:403] duration metric: took 12m9.305275057s to StartCluster
	I1210 06:50:48.715187  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:50:48.715247  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:50:48.740016  213962 cri.go:89] found id: ""
	I1210 06:50:48.740041  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.740050  213962 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:50:48.740058  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:50:48.740121  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:50:48.764372  213962 cri.go:89] found id: ""
	I1210 06:50:48.764396  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.764405  213962 logs.go:284] No container was found matching "etcd"
	I1210 06:50:48.764416  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:50:48.764473  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:50:48.789026  213962 cri.go:89] found id: ""
	I1210 06:50:48.789050  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.789059  213962 logs.go:284] No container was found matching "coredns"
	I1210 06:50:48.789065  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:50:48.789122  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:50:48.818825  213962 cri.go:89] found id: ""
	I1210 06:50:48.818856  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.818872  213962 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:50:48.818882  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:50:48.818965  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:50:48.849712  213962 cri.go:89] found id: ""
	I1210 06:50:48.849790  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.849813  213962 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:50:48.849833  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:50:48.849922  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:50:48.878656  213962 cri.go:89] found id: ""
	I1210 06:50:48.878680  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.878689  213962 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:50:48.878695  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:50:48.878753  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:50:48.904300  213962 cri.go:89] found id: ""
	I1210 06:50:48.904378  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.904402  213962 logs.go:284] No container was found matching "kindnet"
	I1210 06:50:48.904415  213962 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1210 06:50:48.904479  213962 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 06:50:48.943812  213962 cri.go:89] found id: ""
	I1210 06:50:48.943838  213962 logs.go:282] 0 containers: []
	W1210 06:50:48.943846  213962 logs.go:284] No container was found matching "storage-provisioner"
	I1210 06:50:48.943855  213962 logs.go:123] Gathering logs for container status ...
	I1210 06:50:48.943867  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:50:49.015789  213962 logs.go:123] Gathering logs for kubelet ...
	I1210 06:50:49.015816  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:50:49.074369  213962 logs.go:123] Gathering logs for dmesg ...
	I1210 06:50:49.074403  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:50:49.087758  213962 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:50:49.087786  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:50:49.154912  213962 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:50:49.154931  213962 logs.go:123] Gathering logs for containerd ...
	I1210 06:50:49.154943  213962 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1210 06:50:49.194492  213962 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00113931s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:50:49.194552  213962 out.go:285] * 
	W1210 06:50:49.194609  213962 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00113931s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:49.194624  213962 out.go:285] * 
	W1210 06:50:49.196734  213962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:50:49.202541  213962 out.go:203] 
	W1210 06:50:49.206178  213962 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00113931s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:50:49.206236  213962 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:50:49.206258  213962 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:50:49.210058  213962 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.159334107Z" level=info msg="StopPodSandbox for \"ebed669fc0601053471bc91873af582319cb15c46cf508aca8b7ea894b11320e\" returns successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.161074557Z" level=info msg="RemovePodSandbox for \"ebed669fc0601053471bc91873af582319cb15c46cf508aca8b7ea894b11320e\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.161116781Z" level=info msg="Forcibly stopping sandbox \"ebed669fc0601053471bc91873af582319cb15c46cf508aca8b7ea894b11320e\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.161159046Z" level=info msg="Container to stop \"7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.161541344Z" level=info msg="TearDown network for sandbox \"ebed669fc0601053471bc91873af582319cb15c46cf508aca8b7ea894b11320e\" successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.169671393Z" level=info msg="Ensure that sandbox ebed669fc0601053471bc91873af582319cb15c46cf508aca8b7ea894b11320e in task-service has been cleanup successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.176023241Z" level=info msg="RemovePodSandbox \"ebed669fc0601053471bc91873af582319cb15c46cf508aca8b7ea894b11320e\" returns successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.176517918Z" level=info msg="StopPodSandbox for \"70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.176592265Z" level=info msg="Container to stop \"d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.176979051Z" level=info msg="TearDown network for sandbox \"70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81\" successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.177026715Z" level=info msg="StopPodSandbox for \"70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81\" returns successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.177455380Z" level=info msg="RemovePodSandbox for \"70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.177491549Z" level=info msg="Forcibly stopping sandbox \"70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.177524099Z" level=info msg="Container to stop \"d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.177876595Z" level=info msg="TearDown network for sandbox \"70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81\" successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.188528133Z" level=info msg="Ensure that sandbox 70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81 in task-service has been cleanup successfully"
	Dec 10 06:42:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:42:44.203358580Z" level=info msg="RemovePodSandbox \"70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81\" returns successfully"
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.134197723Z" level=info msg="container event discarded" container=3152a22d10cd6523975e7da4277c5bb3d3292ea613d0fafe470d8dc4d0083988 type=CONTAINER_DELETED_EVENT
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.149538195Z" level=info msg="container event discarded" container=d6d9a7f7805b1b9c4eeca6ce9acd58b811bd647e3cbc858e93fd2ef8021d3bc6 type=CONTAINER_DELETED_EVENT
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.162003365Z" level=info msg="container event discarded" container=7871486fcf0585a77f0ff933811812a239ff7e6f41b7f43ef25beea978f951f0 type=CONTAINER_DELETED_EVENT
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.162181730Z" level=info msg="container event discarded" container=e54a3e6a60029ecc7162901b9f9e6e48eb2ba9d2eb8b8ef25ab729533bac9e04 type=CONTAINER_DELETED_EVENT
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.180473730Z" level=info msg="container event discarded" container=7315a14739cce3b95b4c8e91631d5023736e432a8e9a4b847b67c0e95569c00d type=CONTAINER_DELETED_EVENT
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.180530551Z" level=info msg="container event discarded" container=ebed669fc0601053471bc91873af582319cb15c46cf508aca8b7ea894b11320e type=CONTAINER_DELETED_EVENT
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.198770981Z" level=info msg="container event discarded" container=d059f17863e48845c7548f8e3847b485ce5dd0b3ff8fc70f3740f17ec9fd6f15 type=CONTAINER_DELETED_EVENT
	Dec 10 06:47:44 kubernetes-upgrade-712093 containerd[555]: time="2025-12-10T06:47:44.217255983Z" level=info msg="container event discarded" container=70bcdc085503fbd0976c88e3c5f952caf1d96397e148fe091e0a395910a7bd81 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 06:50:50 up  1:33,  0 user,  load average: 1.38, 1.91, 2.25
	Linux kubernetes-upgrade-712093 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:50:47 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:48 kubernetes-upgrade-712093 kubelet[14144]: E1210 06:50:48.220808   14144 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:48 kubernetes-upgrade-712093 kubelet[14213]: E1210 06:50:48.996425   14213 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:48 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:49 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 06:50:49 kubernetes-upgrade-712093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:49 kubernetes-upgrade-712093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:49 kubernetes-upgrade-712093 kubelet[14247]: E1210 06:50:49.783588   14247 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:49 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:49 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:50:50 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:50:50 kubernetes-upgrade-712093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:50 kubernetes-upgrade-712093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:50:50 kubernetes-upgrade-712093 kubelet[14269]: E1210 06:50:50.498184   14269 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:50:50 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:50:50 kubernetes-upgrade-712093 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-712093 -n kubernetes-upgrade-712093
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-712093 -n kubernetes-upgrade-712093: exit status 2 (369.59622ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-712093" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-712093" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-712093
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-712093: (2.149432458s)
--- FAIL: TestKubernetesUpgrade (793.84s)
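
The root cause is visible in the kubelet journal above: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out probing http://127.0.0.1:10248/healthz and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal triage sketch follows, assuming shell access to the node (e.g. via 'minikube ssh'); these are common diagnostics rather than output from this report, and the failCgroupV1 edit is a hypothetical illustration of the option named in the kubeadm warning (field spelling assumed from the KubeletConfiguration API):

	# Which cgroup version does this kernel expose?
	stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" => cgroup v2, "tmpfs" => cgroup v1

	# Inspect the crash-looping kubelet unit (the same commands kubeadm suggests above).
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50

	# Hypothetical opt-in to cgroup v1 for kubelet v1.35 or newer, per the warning text:
	# append failCgroupV1: false to the config kubeadm wrote, then restart kubelet.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

Minikube's own suggestion in the log (start with --extra-config=kubelet.cgroup-driver=systemd) is also worth retrying, though on a cgroup v1 host the kubelet validation above may still require the failCgroupV1 override.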

TestStartStop/group/no-preload/serial/FirstStart (506.65s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m25.098930113s)
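
This start was invoked with --preload=false, so minikube skips the preloaded tarball, caches the control-plane images individually, and fetches the kubeadm/kubelet binaries straight from dl.k8s.io (see the "Not caching binary" and cache.go lines in the stderr capture below). As the kubeadm preflight output elsewhere in this report notes, the image pulls can also be done ahead of time; a small sketch of that manual equivalent, using standard kubeadm commands that are not taken from this report:

	# List, then pre-pull, the control-plane images for the version under test.
	kubeadm config images list --kubernetes-version v1.35.0-rc.1
	kubeadm config images pull --kubernetes-version v1.35.0-rc.1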

-- stdout --
	* [no-preload-320236] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-320236" primary control-plane node in "no-preload-320236" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1210 06:50:10.357147  266079 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:50:10.357356  266079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:50:10.357370  266079 out.go:374] Setting ErrFile to fd 2...
	I1210 06:50:10.357380  266079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:50:10.357776  266079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:50:10.358400  266079 out.go:368] Setting JSON to false
	I1210 06:50:10.359701  266079 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5561,"bootTime":1765343850,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:50:10.359798  266079 start.go:143] virtualization:  
	I1210 06:50:10.364294  266079 out.go:179] * [no-preload-320236] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:50:10.367764  266079 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:50:10.367865  266079 notify.go:221] Checking for updates...
	I1210 06:50:10.374238  266079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:50:10.377475  266079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:50:10.380636  266079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:50:10.383721  266079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:50:10.386768  266079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:50:10.390361  266079 config.go:182] Loaded profile config "kubernetes-upgrade-712093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:50:10.390506  266079 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:50:10.422431  266079 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:50:10.422564  266079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:50:10.506194  266079 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:50:10.497144758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:50:10.506309  266079 docker.go:319] overlay module found
	I1210 06:50:10.509706  266079 out.go:179] * Using the docker driver based on user configuration
	I1210 06:50:10.512587  266079 start.go:309] selected driver: docker
	I1210 06:50:10.512607  266079 start.go:927] validating driver "docker" against <nil>
	I1210 06:50:10.512620  266079 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:50:10.513349  266079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:50:10.566371  266079 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:50:10.557369535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:50:10.566528  266079 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:50:10.566766  266079 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:50:10.569717  266079 out.go:179] * Using Docker driver with root privileges
	I1210 06:50:10.572601  266079 cni.go:84] Creating CNI manager for ""
	I1210 06:50:10.572677  266079 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:50:10.572690  266079 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:50:10.572783  266079 start.go:353] cluster config:
	{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:50:10.576099  266079 out.go:179] * Starting "no-preload-320236" primary control-plane node in "no-preload-320236" cluster
	I1210 06:50:10.579092  266079 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:50:10.582084  266079 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:50:10.585063  266079 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:50:10.585141  266079 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:50:10.585200  266079 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 06:50:10.585229  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json: {Name:mk4bf1092818b21dd1d254a18e84a5343bc61afd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:10.585498  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:10.619049  266079 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:50:10.619077  266079 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:50:10.619102  266079 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:50:10.619132  266079 start.go:360] acquireMachinesLock for no-preload-320236: {Name:mk4a67a43519a7e8fad4432e15b5aa1fee295390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:10.619254  266079 start.go:364] duration metric: took 105.872µs to acquireMachinesLock for "no-preload-320236"
	I1210 06:50:10.619279  266079 start.go:93] Provisioning new machine with config: &{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:50:10.619371  266079 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:50:10.622717  266079 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:50:10.622957  266079 start.go:159] libmachine.API.Create for "no-preload-320236" (driver="docker")
	I1210 06:50:10.622978  266079 client.go:173] LocalClient.Create starting
	I1210 06:50:10.623149  266079 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 06:50:10.623213  266079 main.go:143] libmachine: Decoding PEM data...
	I1210 06:50:10.623236  266079 main.go:143] libmachine: Parsing certificate...
	I1210 06:50:10.623294  266079 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 06:50:10.623311  266079 main.go:143] libmachine: Decoding PEM data...
	I1210 06:50:10.623322  266079 main.go:143] libmachine: Parsing certificate...
	I1210 06:50:10.623746  266079 cli_runner.go:164] Run: docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:50:10.651463  266079 cli_runner.go:211] docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:50:10.651550  266079 network_create.go:284] running [docker network inspect no-preload-320236] to gather additional debugging logs...
	I1210 06:50:10.651574  266079 cli_runner.go:164] Run: docker network inspect no-preload-320236
	W1210 06:50:10.677622  266079 cli_runner.go:211] docker network inspect no-preload-320236 returned with exit code 1
	I1210 06:50:10.677654  266079 network_create.go:287] error running [docker network inspect no-preload-320236]: docker network inspect no-preload-320236: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-320236 not found
	I1210 06:50:10.677669  266079 network_create.go:289] output of [docker network inspect no-preload-320236]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-320236 not found
	
	** /stderr **
	I1210 06:50:10.677769  266079 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:50:10.695532  266079 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 06:50:10.695813  266079 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 06:50:10.696103  266079 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 06:50:10.696399  266079 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e7447d71e340 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:a2:98:b0:0d:26} reservation:<nil>}
	I1210 06:50:10.696761  266079 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a498f0}
	I1210 06:50:10.696786  266079 network_create.go:124] attempt to create docker network no-preload-320236 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:50:10.696847  266079 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-320236 no-preload-320236
	I1210 06:50:10.757486  266079 network_create.go:108] docker network no-preload-320236 192.168.85.0/24 created
	I1210 06:50:10.757517  266079 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-320236" container
	I1210 06:50:10.757591  266079 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:50:10.767650  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:10.781263  266079 cli_runner.go:164] Run: docker volume create no-preload-320236 --label name.minikube.sigs.k8s.io=no-preload-320236 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:50:10.815434  266079 oci.go:103] Successfully created a docker volume no-preload-320236
	I1210 06:50:10.815515  266079 cli_runner.go:164] Run: docker run --rm --name no-preload-320236-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-320236 --entrypoint /usr/bin/test -v no-preload-320236:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:50:10.955596  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:11.147814  266079 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.147921  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:50:11.147930  266079 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.018µs
	I1210 06:50:11.147938  266079 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:50:11.147949  266079 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.147980  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:50:11.147985  266079 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 37.851µs
	I1210 06:50:11.147993  266079 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148003  266079 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148035  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:50:11.148040  266079 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 38.45µs
	I1210 06:50:11.148046  266079 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148057  266079 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148094  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:50:11.148099  266079 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 42.962µs
	I1210 06:50:11.148104  266079 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148115  266079 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148141  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:50:11.148145  266079 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 31.655µs
	I1210 06:50:11.148150  266079 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:50:11.148159  266079 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148184  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:50:11.148189  266079 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.238µs
	I1210 06:50:11.148194  266079 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:50:11.148204  266079 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148230  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:50:11.148234  266079 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 30.885µs
	I1210 06:50:11.148239  266079 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:50:11.148249  266079 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:50:11.148274  266079 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:50:11.148278  266079 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 30.031µs
	I1210 06:50:11.148284  266079 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:50:11.148294  266079 cache.go:87] Successfully saved all images to host disk.
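
Note: the cache.go lines above take a per-image lock, stat the tarball in the cache directory, and skip the save when it already exists, hence the microsecond durations. A sketch of that exists-check under the on-disk layout visible in the log (cache/images/<arch>/<registry>/<repo>_<tag>); cachedImagePath is an illustrative reconstruction, not an exported minikube API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath maps an image ref like
// "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" onto the cache layout
// seen in the log: the tag separator ':' becomes '_'.
func cachedImagePath(cacheDir, arch, image string) string {
	return filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	p := cachedImagePath("/home/jenkins/minikube-integration/22094-2307/.minikube/cache",
		"arm64", "registry.k8s.io/kube-apiserver:v1.35.0-rc.1")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("cache hit:", p) // the "exists ... succeeded" case above
	} else {
		fmt.Println("cache miss, would pull and save:", p)
	}
}
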
	I1210 06:50:11.453488  266079 oci.go:107] Successfully prepared a docker volume no-preload-320236
	I1210 06:50:11.453561  266079 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 06:50:11.453701  266079 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:50:11.453818  266079 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:50:11.514045  266079 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-320236 --name no-preload-320236 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-320236 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-320236 --network no-preload-320236 --ip 192.168.85.2 --volume no-preload-320236:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:50:11.806086  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Running}}
	I1210 06:50:11.828683  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 06:50:11.856487  266079 cli_runner.go:164] Run: docker exec no-preload-320236 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:50:11.910218  266079 oci.go:144] the created container "no-preload-320236" has a running status.
	I1210 06:50:11.910249  266079 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa...
	I1210 06:50:12.228299  266079 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:50:12.257484  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 06:50:12.280168  266079 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:50:12.280194  266079 kic_runner.go:114] Args: [docker exec --privileged no-preload-320236 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:50:12.338357  266079 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 06:50:12.370328  266079 machine.go:94] provisionDockerMachine start ...
	I1210 06:50:12.370429  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:12.390073  266079 main.go:143] libmachine: Using SSH client type: native
	I1210 06:50:12.390434  266079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:50:12.390449  266079 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:50:12.391179  266079 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:50:15.542913  266079 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 06:50:15.542939  266079 ubuntu.go:182] provisioning hostname "no-preload-320236"
	I1210 06:50:15.542999  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:15.561581  266079 main.go:143] libmachine: Using SSH client type: native
	I1210 06:50:15.561920  266079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:50:15.561935  266079 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-320236 && echo "no-preload-320236" | sudo tee /etc/hostname
	I1210 06:50:15.722059  266079 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 06:50:15.722142  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:15.739538  266079 main.go:143] libmachine: Using SSH client type: native
	I1210 06:50:15.739873  266079 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1210 06:50:15.739898  266079 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:50:15.895518  266079 main.go:143] libmachine: SSH cmd err, output: <nil>: 
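
Note: the "Error dialing TCP: ssh: handshake failed: EOF" line followed by a successful hostname command about three seconds later is the signature of a retry loop waiting for sshd to come up in the fresh container. An illustrative Go sketch of such a loop; retryUntil and the 500ms backoff are assumptions, not minikube's exact wiring.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the timeout lapses,
// sleeping briefly between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	attempts := 0
	err := retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("ssh: handshake failed: EOF") // as in the log
		}
		return nil
	})
	fmt.Println("err:", err, "attempts:", attempts)
}
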
	I1210 06:50:15.895544  266079 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:50:15.895573  266079 ubuntu.go:190] setting up certificates
	I1210 06:50:15.895590  266079 provision.go:84] configureAuth start
	I1210 06:50:15.895654  266079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 06:50:15.912513  266079 provision.go:143] copyHostCerts
	I1210 06:50:15.912580  266079 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:50:15.912592  266079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:50:15.912674  266079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:50:15.912774  266079 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:50:15.912784  266079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:50:15.912813  266079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:50:15.912868  266079 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:50:15.912877  266079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:50:15.912906  266079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:50:15.912983  266079 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.no-preload-320236 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-320236]
	I1210 06:50:15.998869  266079 provision.go:177] copyRemoteCerts
	I1210 06:50:15.998946  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:50:15.998986  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.018837  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.122475  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:50:16.139611  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:50:16.157281  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:50:16.174807  266079 provision.go:87] duration metric: took 279.189755ms to configureAuth
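
Note: the configureAuth step above mints a server certificate whose SANs cover every address the API server is reached by; the SAN list and org below are copied from the provision.go:117 line. This sketch is self-signed for brevity, whereas minikube signs with its own CA.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-320236"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-320236"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
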
	I1210 06:50:16.174838  266079 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:50:16.175038  266079 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:50:16.175051  266079 machine.go:97] duration metric: took 3.80470314s to provisionDockerMachine
	I1210 06:50:16.175057  266079 client.go:176] duration metric: took 5.552073605s to LocalClient.Create
	I1210 06:50:16.175068  266079 start.go:167] duration metric: took 5.552117084s to libmachine.API.Create "no-preload-320236"
	I1210 06:50:16.175075  266079 start.go:293] postStartSetup for "no-preload-320236" (driver="docker")
	I1210 06:50:16.175085  266079 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:50:16.175137  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:50:16.175194  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.192492  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.294697  266079 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:50:16.297781  266079 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:50:16.297810  266079 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:50:16.297821  266079 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:50:16.297875  266079 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:50:16.297964  266079 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:50:16.298074  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:50:16.305000  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:50:16.321463  266079 start.go:296] duration metric: took 146.373615ms for postStartSetup
	I1210 06:50:16.321906  266079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 06:50:16.339373  266079 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 06:50:16.339638  266079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:50:16.339696  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.358830  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.468340  266079 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:50:16.474028  266079 start.go:128] duration metric: took 5.854640675s to createHost
	I1210 06:50:16.474051  266079 start.go:83] releasing machines lock for "no-preload-320236", held for 5.854788459s
	I1210 06:50:16.474122  266079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 06:50:16.491075  266079 ssh_runner.go:195] Run: cat /version.json
	I1210 06:50:16.491131  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.491370  266079 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:50:16.491435  266079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 06:50:16.511825  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.526995  266079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 06:50:16.618552  266079 ssh_runner.go:195] Run: systemctl --version
	I1210 06:50:16.708806  266079 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:50:16.713034  266079 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:50:16.713109  266079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:50:16.741364  266079 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
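
Note: the find ... -exec mv step above renames bridge and podman CNI configs to *.mk_disabled so they stop conflicting with the CNI minikube installs (kindnet here). The same walk expressed in Go; it must run as root on the node, and is a sketch only.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}
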
	I1210 06:50:16.741384  266079 start.go:496] detecting cgroup driver to use...
	I1210 06:50:16.741416  266079 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:50:16.741465  266079 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:50:16.756680  266079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:50:16.769479  266079 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:50:16.769538  266079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:50:16.786919  266079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:50:16.805085  266079 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:50:16.924714  266079 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:50:17.049962  266079 docker.go:234] disabling docker service ...
	I1210 06:50:17.050032  266079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:50:17.072329  266079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:50:17.086838  266079 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:50:17.204496  266079 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:50:17.320874  266079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:50:17.333477  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:50:17.347729  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:17.565340  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:50:17.575463  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:50:17.584255  266079 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:50:17.584317  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:50:17.593112  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:50:17.601825  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:50:17.610504  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:50:17.619207  266079 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:50:17.627352  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:50:17.636159  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:50:17.644795  266079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:50:17.653437  266079 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:50:17.660860  266079 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:50:17.667912  266079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:50:17.772294  266079 ssh_runner.go:195] Run: sudo systemctl restart containerd
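
Note: the sed one-liners above patch /etc/containerd/config.toml in place before the daemon-reload and restart. The SystemdCgroup edit expressed in Go: force it to false so containerd matches the "cgroupfs" driver detected on the host. Path and setting mirror the log; this runs on the node, not the host, and is a sketch only.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same edit as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		log.Fatal(err)
	}
}
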
	I1210 06:50:17.868517  266079 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:50:17.868615  266079 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:50:17.872832  266079 start.go:564] Will wait 60s for crictl version
	I1210 06:50:17.872895  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:17.876778  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:50:17.911625  266079 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:50:17.911714  266079 ssh_runner.go:195] Run: containerd --version
	I1210 06:50:17.934503  266079 ssh_runner.go:195] Run: containerd --version
	I1210 06:50:17.968781  266079 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:50:17.971703  266079 cli_runner.go:164] Run: docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:50:17.987544  266079 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:50:17.991316  266079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:50:18.002729  266079 kubeadm.go:884] updating cluster {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:50:18.002929  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:18.171837  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:18.331997  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:18.481039  266079 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:50:18.481126  266079 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:50:18.505475  266079 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:50:18.505499  266079 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:50:18.505562  266079 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:18.505769  266079 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.505855  266079 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.505933  266079 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.506040  266079 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.506130  266079 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:50:18.506211  266079 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.506304  266079 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.507390  266079 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.507864  266079 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.508159  266079 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.508238  266079 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:50:18.508332  266079 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.508378  266079 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.508422  266079 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:18.508941  266079 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.821197  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:50:18.821276  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.837491  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:50:18.837624  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.845023  266079 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:50:18.845115  266079 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.845217  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.848059  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:50:18.848126  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.861154  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:50:18.861287  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.864373  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:50:18.864440  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:50:18.866966  266079 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:50:18.867081  266079 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.867166  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.867296  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.885523  266079 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:50:18.885566  266079 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.885615  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.913356  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:50:18.913424  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.916853  266079 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:50:18.916944  266079 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.917032  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.917155  266079 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:50:18.917193  266079 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:50:18.917245  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.921439  266079 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:50:18.921506  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.924302  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:18.924471  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:18.924604  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:18.953250  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:50:18.953395  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:18.953507  266079 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:50:18.953571  266079 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:18.953649  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:18.978208  266079 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:50:18.978254  266079 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:18.978312  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:19.035858  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:19.035939  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:50:19.036008  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:19.052663  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:19.052759  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:50:19.052829  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:19.052894  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:19.140965  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:50:19.141038  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:50:19.141116  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:50:19.141173  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:50:19.171865  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:19.171964  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:19.172032  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:50:19.172088  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:50:19.201381  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.201593  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.201731  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:50:19.201792  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:50:19.201967  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:50:19.202097  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:50:19.290804  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:50:19.290966  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:19.291120  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:19.291296  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:50:19.291413  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:50:19.291510  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:50:19.291629  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:50:19.291682  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:50:19.291770  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.291925  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1210 06:50:19.371780  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:50:19.371891  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:50:19.371954  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:19.372001  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:19.372292  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.372315  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:50:19.372362  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:50:19.372374  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:50:19.471162  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.471271  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:50:19.477508  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:50:19.477603  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:50:19.515802  266079 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:50:19.516089  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1210 06:50:19.695827  266079 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:50:19.695996  266079 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:50:19.696072  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:19.836741  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:50:19.856904  266079 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:50:19.856952  266079 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:19.857002  266079 ssh_runner.go:195] Run: which crictl
	I1210 06:50:19.874139  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.874219  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:50:19.937180  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:21.190446  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.316198574s)
	I1210 06:50:21.190477  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:50:21.190494  266079 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:50:21.190551  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:50:21.190629  266079 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.253428979s)
	I1210 06:50:21.190674  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:22.218966  266079 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.028260036s)
	I1210 06:50:22.219089  266079 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:50:22.219176  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.028608495s)
	I1210 06:50:22.219192  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:50:22.219215  266079 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:50:22.219252  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:50:23.544801  266079 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.325669259s)
	I1210 06:50:23.544891  266079 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:50:23.545002  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:50:23.545071  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.32580549s)
	I1210 06:50:23.545141  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:50:23.545178  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:23.545239  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:50:24.466489  266079 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:50:24.466527  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:50:24.466728  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:50:24.466792  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:24.466875  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:50:25.611589  266079 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.144674621s)
	I1210 06:50:25.611666  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:50:25.611694  266079 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:50:25.611767  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:50:26.609696  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:50:26.609725  266079 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:50:26.609775  266079 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:50:26.994232  266079 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:50:26.994271  266079 cache_images.go:125] Successfully loaded all cached images
	I1210 06:50:26.994277  266079 cache_images.go:94] duration metric: took 8.488762015s to LoadCachedImages
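
Note: each "Loading image" step above is a two-phase move: the cached tarball is scp'd into /var/lib/minikube/images on the node, then handed to containerd's k8s.io namespace with ctr. A local sketch of the import phase; in minikube this command runs over the ssh runner inside the node.

package main

import (
	"fmt"
	"os/exec"
)

// importImage mirrors the "sudo ctr -n=k8s.io images import" calls above.
func importImage(tarball string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := importImage("/var/lib/minikube/images/pause_3.10.1"); err != nil {
		fmt.Println(err)
	}
}
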
	I1210 06:50:26.994297  266079 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:50:26.994404  266079 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
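
Note: the kubelet unit override printed above is a plain template over the node's name, IP and Kubernetes version. A trimmed rendering of the same idea; the template text below is abbreviated from the log output and omits the bootstrap and kubeconfig flags.

package main

import (
	"os"
	"text/template"
)

var unit = template.Must(template.New("kubelet").Parse(`[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Name}} --node-ip={{.IP}}

[Install]
`))

func main() {
	unit.Execute(os.Stdout, struct{ Version, Name, IP string }{
		Version: "v1.35.0-rc.1", Name: "no-preload-320236", IP: "192.168.85.2",
	})
}
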
	I1210 06:50:26.994503  266079 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:50:27.021471  266079 cni.go:84] Creating CNI manager for ""
	I1210 06:50:27.021497  266079 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:50:27.021517  266079 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:50:27.021539  266079 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320236 NodeName:no-preload-320236 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:50:27.021669  266079 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-320236"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
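
The file above is a single multi-document YAML that kubeadm consumes in one pass: InitConfiguration (node and bootstrap-token settings), ClusterConfiguration (control-plane layout), KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check a config like this before running init, as a sketch assuming a kubeadm recent enough to ship the `config validate` subcommand:

	# report any schema or semantic problems in the multi-document config
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml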
	
	I1210 06:50:27.021747  266079 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:50:27.029836  266079 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:50:27.029902  266079 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:50:27.037943  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:50:27.037959  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:50:27.038034  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:50:27.037942  266079 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:50:27.038148  266079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:50:27.038035  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:50:27.051706  266079 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:50:27.051753  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:50:27.051883  266079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:50:27.051947  266079 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:50:27.051970  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:50:27.059739  266079 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:50:27.059789  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
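
With no preload tarball available for this profile, the three Kubernetes binaries are streamed straight from dl.k8s.io with a checksum pinned to the published .sha256 file (the "Not caching binary" lines above) and then copied onto the node. The manual equivalent looks roughly like this, assuming curl and sha256sum on the host:

	VER=v1.35.0-rc.1; ARCH=arm64
	for BIN in kubeadm kubectl kubelet; do
	  # fetch the binary and its published checksum, then verify before use
	  curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${BIN}"
	  curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${BIN}.sha256"
	  echo "$(cat ${BIN}.sha256)  ${BIN}" | sha256sum --check
	done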
	I1210 06:50:27.929816  266079 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:50:27.939061  266079 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 06:50:27.952801  266079 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:50:27.966670  266079 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 06:50:27.981130  266079 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:50:27.984820  266079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:50:27.994812  266079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:50:28.107290  266079 ssh_runner.go:195] Run: sudo systemctl start kubelet
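
The kubelet is wired up the systemd way here: the base unit (/lib/systemd/system/kubelet.service) is extended by a drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) carrying the ExecStart flags shown earlier, which is why a daemon-reload precedes the start. To see the merged result systemd will actually execute, a one-liner (standard systemctl, nothing minikube-specific):

	# prints kubelet.service followed by every drop-in, including 10-kubeadm.conf
	systemctl cat kubelet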
	I1210 06:50:28.124654  266079 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236 for IP: 192.168.85.2
	I1210 06:50:28.124683  266079 certs.go:195] generating shared ca certs ...
	I1210 06:50:28.124715  266079 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.124889  266079 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:50:28.124965  266079 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:50:28.124999  266079 certs.go:257] generating profile certs ...
	I1210 06:50:28.125078  266079 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key
	I1210 06:50:28.125098  266079 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt with IP's: []
	I1210 06:50:28.438914  266079 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt ...
	I1210 06:50:28.438949  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: {Name:mk87e6d0d00fdfa55c157efee4f653a866c16600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.439194  266079 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key ...
	I1210 06:50:28.439211  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key: {Name:mk8fa1af6ba001f3c44ba9cb3c76d7ccfa3a8913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.439346  266079 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447
	I1210 06:50:28.439367  266079 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 06:50:28.688325  266079 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447 ...
	I1210 06:50:28.688356  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447: {Name:mkc7ebf7e1f25249f97724e0934b50d7cab2a773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.688534  266079 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447 ...
	I1210 06:50:28.688548  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447: {Name:mkaeb8e263db616770ec5454284d7888c2f59143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.688657  266079 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt.2faa2447 -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt
	I1210 06:50:28.688747  266079 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447 -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key
	I1210 06:50:28.688806  266079 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key
	I1210 06:50:28.688826  266079 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt with IP's: []
	I1210 06:50:28.923992  266079 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt ...
	I1210 06:50:28.924071  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt: {Name:mka1b39a38a0d4ece5ba7ee846992485136c8d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.924279  266079 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key ...
	I1210 06:50:28.924339  266079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key: {Name:mk78055f88bce004615455c1f6210d3942403534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:50:28.924560  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:50:28.924644  266079 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:50:28.924671  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:50:28.924732  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:50:28.924786  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:50:28.924837  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:50:28.924918  266079 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:50:28.925545  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:50:28.942768  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:50:28.959961  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:50:28.976970  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:50:28.993973  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:50:29.013851  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 06:50:29.031564  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:50:29.049267  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:50:29.067220  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:50:29.086377  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:50:29.103910  266079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:50:29.121296  266079 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:50:29.134219  266079 ssh_runner.go:195] Run: openssl version
	I1210 06:50:29.140393  266079 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.148154  266079 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:50:29.155617  266079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.159327  266079 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.159416  266079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:50:29.205455  266079 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:50:29.213491  266079 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:50:29.221070  266079 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.228439  266079 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:50:29.236182  266079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.240205  266079 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.240276  266079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:50:29.283604  266079 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:50:29.291346  266079 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:50:29.298958  266079 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.306694  266079 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:50:29.314188  266079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.317729  266079 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.317790  266079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:50:29.358402  266079 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:50:29.365944  266079 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
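
The repeating test/ln/openssl pattern above implements OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (here b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs), which is how certificate verification locates issuers. As a sketch for a single certificate:

	# link a CA into /etc/ssl/certs under its OpenSSL subject-hash name
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"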
	I1210 06:50:29.373196  266079 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:50:29.376794  266079 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:50:29.376848  266079 kubeadm.go:401] StartCluster: {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:50:29.376921  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:50:29.376978  266079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:50:29.405917  266079 cri.go:89] found id: ""
	I1210 06:50:29.405988  266079 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:50:29.418877  266079 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:50:29.427150  266079 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:50:29.427261  266079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:50:29.437933  266079 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:50:29.438007  266079 kubeadm.go:158] found existing configuration files:
	
	I1210 06:50:29.438110  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:50:29.446767  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:50:29.446874  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:50:29.454604  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:50:29.463119  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:50:29.463230  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:50:29.470664  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:50:29.483114  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:50:29.483228  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:50:29.490744  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:50:29.498555  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:50:29.498622  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:50:29.505904  266079 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:50:29.545803  266079 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:50:29.545951  266079 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:50:29.613904  266079 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:50:29.614011  266079 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:50:29.614070  266079 kubeadm.go:319] OS: Linux
	I1210 06:50:29.614138  266079 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:50:29.614207  266079 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:50:29.614272  266079 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:50:29.614355  266079 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:50:29.614431  266079 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:50:29.614506  266079 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:50:29.614600  266079 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:50:29.614672  266079 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:50:29.614744  266079 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:50:29.689849  266079 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:50:29.690004  266079 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:50:29.690118  266079 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:50:29.699380  266079 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:50:29.709446  266079 out.go:252]   - Generating certificates and keys ...
	I1210 06:50:29.709579  266079 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:50:29.709684  266079 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:50:29.815311  266079 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:50:30.084534  266079 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:50:30.243769  266079 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:50:30.493801  266079 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:50:30.608119  266079 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:50:30.608485  266079 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-320236] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:50:30.756158  266079 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:50:30.756656  266079 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-320236] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:50:31.029200  266079 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:50:31.107257  266079 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:50:31.381355  266079 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:50:31.381715  266079 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:50:31.769307  266079 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:50:32.114938  266079 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:50:32.369786  266079 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:50:32.896406  266079 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:50:32.966609  266079 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:50:32.967762  266079 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:50:32.972159  266079 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:50:32.977107  266079 out.go:252]   - Booting up control plane ...
	I1210 06:50:32.977229  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:50:32.977314  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:50:32.977381  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:50:32.993235  266079 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:50:32.993346  266079 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:50:33.002953  266079 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:50:33.003387  266079 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:50:33.003438  266079 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:50:33.142914  266079 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:50:33.143052  266079 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:54:33.140774  266079 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001175144s
	I1210 06:54:33.140801  266079 kubeadm.go:319] 
	I1210 06:54:33.140855  266079 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:54:33.140887  266079 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:54:33.140986  266079 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:54:33.140991  266079 kubeadm.go:319] 
	I1210 06:54:33.141089  266079 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:54:33.141120  266079 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:54:33.141149  266079 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:54:33.141153  266079 kubeadm.go:319] 
	I1210 06:54:33.146527  266079 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:54:33.147192  266079 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:54:33.147309  266079 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:54:33.147590  266079 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:54:33.147603  266079 kubeadm.go:319] 
	I1210 06:54:33.147676  266079 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 06:54:33.147826  266079 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-320236] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-320236] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001175144s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
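
kubeadm only polls the kubelet's local healthz endpoint for 4m0s before giving up, and everything needed to triage the failure is in its own hint plus the log gathering further down. As one sketch, run on the node:

	systemctl status kubelet                   # is the service active at all?
	journalctl -xeu kubelet | tail -n 100      # most recent kubelet errors
	curl -sSL http://127.0.0.1:10248/healthz   # the exact endpoint kubeadm polls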
	
	I1210 06:54:33.147907  266079 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 06:54:33.558308  266079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:54:33.571263  266079 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:54:33.571327  266079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:54:33.578913  266079 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:54:33.578934  266079 kubeadm.go:158] found existing configuration files:
	
	I1210 06:54:33.579038  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:54:33.586619  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:54:33.586692  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:54:33.593911  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:54:33.601305  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:54:33.601388  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:54:33.608595  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:54:33.615845  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:54:33.615908  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:54:33.623056  266079 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:54:33.630524  266079 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:54:33.630587  266079 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:54:33.637776  266079 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:54:33.673984  266079 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:54:33.674283  266079 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:54:33.753248  266079 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:54:33.753326  266079 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:54:33.753369  266079 kubeadm.go:319] OS: Linux
	I1210 06:54:33.753419  266079 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:54:33.753471  266079 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:54:33.753522  266079 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:54:33.753573  266079 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:54:33.753626  266079 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:54:33.753677  266079 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:54:33.753725  266079 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:54:33.753777  266079 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:54:33.753827  266079 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:54:33.814499  266079 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:54:33.814629  266079 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:54:33.814762  266079 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:54:33.823442  266079 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:54:33.825521  266079 out.go:252]   - Generating certificates and keys ...
	I1210 06:54:33.825642  266079 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:54:33.825725  266079 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:54:33.825849  266079 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:54:33.825933  266079 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:54:33.826062  266079 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:54:33.826146  266079 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:54:33.826223  266079 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:54:33.826330  266079 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:54:33.826410  266079 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:54:33.826491  266079 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:54:33.826529  266079 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:54:33.826597  266079 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:54:33.965335  266079 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:54:34.233480  266079 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:54:34.691308  266079 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:54:34.734970  266079 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:54:34.830268  266079 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:54:34.831112  266079 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:54:34.833896  266079 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:54:34.837354  266079 out.go:252]   - Booting up control plane ...
	I1210 06:54:34.837502  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:54:34.837624  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:54:34.837738  266079 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:54:34.857844  266079 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:54:34.857960  266079 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:54:34.865878  266079 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:54:34.866576  266079 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:54:34.866838  266079 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:54:34.995507  266079 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:54:34.995634  266079 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:58:34.995507  266079 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000116079s
	I1210 06:58:34.995538  266079 kubeadm.go:319] 
	I1210 06:58:34.995597  266079 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:58:34.995631  266079 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:58:34.995735  266079 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:58:34.995740  266079 kubeadm.go:319] 
	I1210 06:58:34.995845  266079 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:58:34.995886  266079 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:58:34.995923  266079 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:58:34.995928  266079 kubeadm.go:319] 
	I1210 06:58:35.000052  266079 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:58:35.000496  266079 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:58:35.000614  266079 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:58:35.000866  266079 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:58:35.000872  266079 kubeadm.go:319] 
	I1210 06:58:35.000939  266079 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
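
Both init attempts die identically at the kubelet health check, and the one substantive preflight warning points at cgroups: this 5.15 AWS kernel runs cgroup v1, which kubelet v1.35+ refuses unless cgroup v1 support is explicitly re-enabled, so that is at least a plausible culprit. Per the warning text, the opt-in is the FailCgroupV1 kubelet option (assumed here to serialize as failCgroupV1 in kubelet.config.k8s.io/v1beta1); the KubeletConfiguration document in the generated kubeadm.yaml would gain one field, and the warning notes the matching preflight validation must also be skipped:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false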
	I1210 06:58:35.001867  266079 kubeadm.go:403] duration metric: took 8m5.625012416s to StartCluster
	I1210 06:58:35.001964  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:58:35.002061  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:58:35.029739  266079 cri.go:89] found id: ""
	I1210 06:58:35.029800  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.029809  266079 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:58:35.029823  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:58:35.029903  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:58:35.059137  266079 cri.go:89] found id: ""
	I1210 06:58:35.059162  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.059171  266079 logs.go:284] No container was found matching "etcd"
	I1210 06:58:35.059177  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:58:35.059235  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:58:35.084571  266079 cri.go:89] found id: ""
	I1210 06:58:35.084597  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.084606  266079 logs.go:284] No container was found matching "coredns"
	I1210 06:58:35.084613  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:58:35.084678  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:58:35.113733  266079 cri.go:89] found id: ""
	I1210 06:58:35.113756  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.113765  266079 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:58:35.113772  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:58:35.113830  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:58:35.138121  266079 cri.go:89] found id: ""
	I1210 06:58:35.138147  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.138156  266079 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:58:35.138162  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:58:35.138219  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:58:35.164400  266079 cri.go:89] found id: ""
	I1210 06:58:35.164423  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.164432  266079 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:58:35.164438  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:58:35.164496  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:58:35.188393  266079 cri.go:89] found id: ""
	I1210 06:58:35.188416  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.188424  266079 logs.go:284] No container was found matching "kindnet"
	I1210 06:58:35.188434  266079 logs.go:123] Gathering logs for containerd ...
	I1210 06:58:35.188445  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:58:35.229460  266079 logs.go:123] Gathering logs for container status ...
	I1210 06:58:35.229497  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:58:35.258104  266079 logs.go:123] Gathering logs for kubelet ...
	I1210 06:58:35.258133  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:58:35.314798  266079 logs.go:123] Gathering logs for dmesg ...
	I1210 06:58:35.314833  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:58:35.327838  266079 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:58:35.327863  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:58:35.388749  266079 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
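`kubectl describe nodes` fails for the same underlying reason: nothing is listening on the apiserver port, so every API call is refused. A quick confirmation from inside the node (a sketch; assumes `curl` and `ss` are present in the kicbase image):

    curl -ks --max-time 5 https://localhost:8443/healthz; echo   # "connection refused" while the apiserver is down
    sudo ss -tlnp | grep ':8443' || echo "no listener on :8443"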
	W1210 06:58:35.388774  266079 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
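kubeadm's advice above is the right starting point: the kubelet never answered its health endpoint, so the first question is whether the unit is running at all. Spelled out as commands to run inside the node:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100   # the last events before the 4m0s health check gave up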
	W1210 06:58:35.388804  266079 out.go:285] * 
	W1210 06:58:35.388874  266079 out.go:285] * 
	W1210 06:58:35.390983  266079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
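For this run, the log capture the box asks for would be (profile name taken from the failing test):

    out/minikube-linux-arm64 -p no-preload-320236 logs --file=logs.txt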
	I1210 06:58:35.395719  266079 out.go:203] 
	W1210 06:58:35.397686  266079 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
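Of the three preflight warnings, the cgroups v1 one is the most plausible culprit on this Ubuntu 20.04 host: for kubelet v1.35 or newer, running on cgroups v1 requires an explicit opt-in. A minimal sketch of the configuration fragment the warning describes, assuming the YAML spelling of the option is `failCgroupV1` (the warning only gives the Go-style name 'FailCgroupV1'):

    # kubelet-cgroupv1-optin.yaml (sketch of the file content)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # explicitly allow kubelet to run on cgroups v1, per the warning above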
	
	W1210 06:58:35.397726  266079 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:58:35.397746  266079 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:58:35.401447  266079 out.go:203] 

** /stderr **
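minikube's own suggestion in the log is to retry with an explicit kubelet cgroup driver. Applied to the arguments this test used, the retry would look like this (a sketch, not verified against this failure):

    out/minikube-linux-arm64 start -p no-preload-320236 --memory=3072 --wait=true --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd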
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-320236
helpers_test.go:244: (dbg) docker inspect no-preload-320236:

-- stdout --
	[
	    {
	        "Id": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	        "Created": "2025-12-10T06:50:11.529745127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266409,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:50:11.59482855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hosts",
	        "LogPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df-json.log",
	        "Name": "/no-preload-320236",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-320236:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-320236",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	                "LowerDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-320236",
	                "Source": "/var/lib/docker/volumes/no-preload-320236/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-320236",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-320236",
	                "name.minikube.sigs.k8s.io": "no-preload-320236",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85ae7e8702e41f92b33b5a42b651a54aa9c0e327b78652a75f1a51d370271f8b",
	            "SandboxKey": "/var/run/docker/netns/85ae7e8702e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-320236": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:07:05:69:57:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ffc647756423b4e81a1338ec5e1d5f1765ed1034ce5ae186c5fbcc84bf8cb09",
	                    "EndpointID": "d093b0e10fa0218a37c48573bc31f25266756d6a2b6d0253a5c740e71d806388",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-320236",
	                        "afeff1ab0e38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
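Most of the inspect dump is boilerplate; the fields that matter for triage are the container state, the published ports, and the cluster IP. A sketch for extracting just those (assumes `jq` is installed on the host):

    docker inspect no-preload-320236 | jq '.[0] | {state: .State.Status, ports: .NetworkSettings.Ports, ip: .NetworkSettings.Networks["no-preload-320236"].IPAddress}'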
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 6 (332.184791ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:58:35.812235  292704 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
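The status error only reflects that the aborted start never wrote this profile into the kubeconfig. The fix the warning itself suggests, using this binary and profile, would be (a sketch):

    out/minikube-linux-arm64 update-context -p no-preload-320236
    kubectl config current-context   # verify which context kubectl now points at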
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-320236 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-806899                                                                                                                                                                                                                                │ old-k8s-version-806899       │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-712093                                                                                                                                                                                                                             │ kubernetes-upgrade-712093    │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-451123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:51 UTC │ 10 Dec 25 06:52 UTC │
	│ stop    │ -p embed-certs-451123 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-451123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:53 UTC │
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:55:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
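Given the `[IWEF]mmdd hh:mm:ss.uuuuuu` line format documented above, warnings, errors, and fatals can be filtered out of a saved log with a simple pattern (a sketch; `logs.txt` stands for wherever the log was written):

    grep -E '^[[:space:]]*[WEF][0-9]{4} ' logs.txt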
	I1210 06:55:54.981794  288031 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:55:54.981926  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.981937  288031 out.go:374] Setting ErrFile to fd 2...
	I1210 06:55:54.981942  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.982225  288031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:55:54.982645  288031 out.go:368] Setting JSON to false
	I1210 06:55:54.983532  288031 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5905,"bootTime":1765343850,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:55:54.983604  288031 start.go:143] virtualization:  
	I1210 06:55:54.987589  288031 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:55:54.990952  288031 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:55:54.991143  288031 notify.go:221] Checking for updates...
	I1210 06:55:54.999718  288031 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:55:55.004245  288031 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:55:55.007947  288031 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:55:55.011263  288031 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:55:55.014567  288031 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:55:55.018346  288031 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:55:55.018474  288031 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:55:55.050040  288031 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:55:55.050159  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.110692  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.101413341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.110829  288031 docker.go:319] overlay module found
	I1210 06:55:55.114039  288031 out.go:179] * Using the docker driver based on user configuration
	I1210 06:55:55.116970  288031 start.go:309] selected driver: docker
	I1210 06:55:55.116990  288031 start.go:927] validating driver "docker" against <nil>
	I1210 06:55:55.117003  288031 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:55:55.117774  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.187658  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.175913019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.187828  288031 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:55:55.187862  288031 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:55:55.188080  288031 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:55:55.191065  288031 out.go:179] * Using Docker driver with root privileges
	I1210 06:55:55.193975  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:55:55.194040  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:55:55.194060  288031 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:55:55.194137  288031 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:55:55.197188  288031 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 06:55:55.199998  288031 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:55:55.202945  288031 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:55:55.205774  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:55:55.205946  288031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:55:55.228535  288031 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:55:55.228555  288031 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:55:55.253626  288031 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 06:55:55.392999  288031 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
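
The two 404s above are the preload fast path failing: no preloaded image tarball is published for the v1.35.0-rc.1 release candidate, so minikube falls back to caching and loading individual images (which is what the rest of this log does). The missing tarball can be confirmed by hand; assuming it is still unpublished, a HEAD request should print a 404 status line:

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 | head -n1
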
	I1210 06:55:55.393221  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:55:55.393258  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json: {Name:mke358d8c3878b6ccc086ae75b08bfbb6079278d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:55:55.393289  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
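
The "Not caching binary" lines mean kubeadm is streamed straight from dl.k8s.io and validated against the published .sha256 file rather than stored in the local cache. A manual sketch of the same check (the two-space format is what sha256sum expects):

	curl -fLO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm
	curl -fL https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256 -o kubeadm.sha256
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
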
	I1210 06:55:55.393417  288031 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:55:55.393461  288031 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.393543  288031 start.go:364] duration metric: took 46.523µs to acquireMachinesLock for "newest-cni-168808"
	I1210 06:55:55.393571  288031 start.go:93] Provisioning new machine with config: &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:55:55.393679  288031 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:55:55.397127  288031 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:55:55.397358  288031 start.go:159] libmachine.API.Create for "newest-cni-168808" (driver="docker")
	I1210 06:55:55.397385  288031 client.go:173] LocalClient.Create starting
	I1210 06:55:55.397438  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 06:55:55.397479  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397497  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397545  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 06:55:55.397561  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397572  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397949  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:55:55.421587  288031 cli_runner.go:211] docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:55:55.421662  288031 network_create.go:284] running [docker network inspect newest-cni-168808] to gather additional debugging logs...
	I1210 06:55:55.421680  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808
	W1210 06:55:55.440445  288031 cli_runner.go:211] docker network inspect newest-cni-168808 returned with exit code 1
	I1210 06:55:55.440476  288031 network_create.go:287] error running [docker network inspect newest-cni-168808]: docker network inspect newest-cni-168808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-168808 not found
	I1210 06:55:55.440491  288031 network_create.go:289] output of [docker network inspect newest-cni-168808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-168808 not found
	
	** /stderr **
	I1210 06:55:55.440592  288031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:55:55.472278  288031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 06:55:55.472550  288031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 06:55:55.472849  288031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 06:55:55.473245  288031 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fe00}
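
Subnet selection above walks minikube's candidate private /24s in order and skips any already owned by a Docker bridge. The occupied set can be listed directly; this is a hedged equivalent of the three "skipping subnet" probes, not the code minikube runs:

	docker network ls --filter driver=bridge -q | xargs docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
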
	I1210 06:55:55.473272  288031 network_create.go:124] attempt to create docker network newest-cni-168808 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:55:55.473327  288031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-168808 newest-cni-168808
	I1210 06:55:55.535150  288031 network_create.go:108] docker network newest-cni-168808 192.168.76.0/24 created
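
The create succeeded, so the network should now report the chosen subnet and gateway. A quick verification (the expected output is inferred from the values above):

	docker network inspect newest-cni-168808 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# 192.168.76.0/24 via 192.168.76.1
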
	I1210 06:55:55.535181  288031 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-168808" container
	I1210 06:55:55.535292  288031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:55:55.551392  288031 cli_runner.go:164] Run: docker volume create newest-cni-168808 --label name.minikube.sigs.k8s.io=newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:55:55.554117  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.578140  288031 oci.go:103] Successfully created a docker volume newest-cni-168808
	I1210 06:55:55.578234  288031 cli_runner.go:164] Run: docker run --rm --name newest-cni-168808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --entrypoint /usr/bin/test -v newest-cni-168808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
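
The "preload sidecar" run is a volume-seeding trick: mounting the freshly created named volume at /var makes Docker copy the image's /var contents into it (named-volume copy-up), and /usr/bin/test -d /var/lib doubles as the success probe. A minimal reproduction with a stock image (alpine and demo-var are stand-ins, not what this job used):

	docker volume create demo-var
	docker run --rm -v demo-var:/var alpine sh -c 'test -d /var/lib' && echo "volume seeded"
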
	I1210 06:55:55.718018  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.932804  288031 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.932932  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:55:55.932947  288031 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 167.936µs
	I1210 06:55:55.932957  288031 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:55:55.932978  288031 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933015  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:55:55.933025  288031 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 53.498µs
	I1210 06:55:55.933032  288031 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933044  288031 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933075  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:55:55.933085  288031 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 42.708µs
	I1210 06:55:55.933092  288031 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933106  288031 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933143  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:55:55.933152  288031 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 47.762µs
	I1210 06:55:55.933164  288031 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933176  288031 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933206  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:55:55.933216  288031 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 41.01µs
	I1210 06:55:55.933228  288031 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933236  288031 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933268  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:55:55.933277  288031 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.945µs
	I1210 06:55:55.933283  288031 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:55:55.933292  288031 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933320  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:55:55.933328  288031 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.703µs
	I1210 06:55:55.933334  288031 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:55:55.933343  288031 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933369  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:55:55.933381  288031 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.287µs
	I1210 06:55:55.933387  288031 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:55:55.933393  288031 cache.go:87] Successfully saved all images to host disk.
	I1210 06:55:56.133246  288031 oci.go:107] Successfully prepared a docker volume newest-cni-168808
	I1210 06:55:56.133310  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 06:55:56.133458  288031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:55:56.133555  288031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:55:56.190219  288031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-168808 --name newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-168808 --network newest-cni-168808 --ip 192.168.76.2 --volume newest-cni-168808:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:55:56.510233  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Running}}
	I1210 06:55:56.532276  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.559120  288031 cli_runner.go:164] Run: docker exec newest-cni-168808 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:55:56.616474  288031 oci.go:144] the created container "newest-cni-168808" has a running status.
	I1210 06:55:56.616510  288031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa...
	I1210 06:55:56.920989  288031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:55:56.944042  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.969366  288031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:55:56.969535  288031 kic_runner.go:114] Args: [docker exec --privileged newest-cni-168808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:55:57.033434  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:57.058007  288031 machine.go:94] provisionDockerMachine start ...
	I1210 06:55:57.058103  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:55:57.089237  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:55:57.089566  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:55:57.089575  288031 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:55:57.090220  288031 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58770->127.0.0.1:33093: read: connection reset by peer
	I1210 06:56:00.364112  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.364135  288031 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 06:56:00.364212  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.456773  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.457119  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.457133  288031 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 06:56:00.645316  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.645407  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.664033  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.664382  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.664404  288031 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:56:00.815306  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:56:00.815331  288031 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:56:00.815364  288031 ubuntu.go:190] setting up certificates
	I1210 06:56:00.815372  288031 provision.go:84] configureAuth start
	I1210 06:56:00.815439  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:00.832798  288031 provision.go:143] copyHostCerts
	I1210 06:56:00.832883  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:56:00.832898  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:56:00.832975  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:56:00.833075  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:56:00.833087  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:56:00.833119  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:56:00.833186  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:56:00.833196  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:56:00.833222  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:56:00.833276  288031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
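
configureAuth generates a server certificate whose SANs cover every address the machine may be reached on: loopback, the static container IP 192.168.76.2, and the host/cluster names. Assuming OpenSSL 1.1.1+ and the MINIKUBE_HOME layout shown earlier in this log, the SAN list can be inspected with:

	openssl x509 -noout -ext subjectAltName \
	  -in "$MINIKUBE_HOME/machines/server.pem"
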
	I1210 06:56:00.918781  288031 provision.go:177] copyRemoteCerts
	I1210 06:56:00.919089  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:56:00.919173  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.937214  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.043240  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:56:01.061326  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:56:01.079140  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:56:01.096712  288031 provision.go:87] duration metric: took 281.317584ms to configureAuth
	I1210 06:56:01.096743  288031 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:56:01.096994  288031 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:56:01.097006  288031 machine.go:97] duration metric: took 4.038973217s to provisionDockerMachine
	I1210 06:56:01.097025  288031 client.go:176] duration metric: took 5.699623594s to LocalClient.Create
	I1210 06:56:01.097050  288031 start.go:167] duration metric: took 5.699693115s to libmachine.API.Create "newest-cni-168808"
	I1210 06:56:01.097057  288031 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 06:56:01.097073  288031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:56:01.097147  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:56:01.097204  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.117411  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.225094  288031 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:56:01.228823  288031 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:56:01.228858  288031 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:56:01.228870  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:56:01.228945  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:56:01.229044  288031 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:56:01.229154  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:56:01.237207  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:01.255822  288031 start.go:296] duration metric: took 158.728391ms for postStartSetup
	I1210 06:56:01.256262  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.275219  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:56:01.275529  288031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:56:01.275586  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.293397  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.396136  288031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:56:01.401043  288031 start.go:128] duration metric: took 6.00734179s to createHost
	I1210 06:56:01.401068  288031 start.go:83] releasing machines lock for "newest-cni-168808", held for 6.007509906s
	I1210 06:56:01.401140  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.417888  288031 ssh_runner.go:195] Run: cat /version.json
	I1210 06:56:01.417948  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.418253  288031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:56:01.418318  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.442401  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.449051  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.632926  288031 ssh_runner.go:195] Run: systemctl --version
	I1210 06:56:01.640549  288031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:56:01.645141  288031 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:56:01.645218  288031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:56:01.673901  288031 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
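
The find above renames any bridge/podman CNI configs out of the runtime's way; the log strips the shell escaping, so here is the same command with quoting restored (a reconstruction, not a verbatim copy):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
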
	I1210 06:56:01.673935  288031 start.go:496] detecting cgroup driver to use...
	I1210 06:56:01.673969  288031 detect.go:187] detected "cgroupfs" cgroup driver on host os
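
detect.go settled on "cgroupfs" here. One common probe for the host's cgroup mode (not necessarily minikube's exact logic) is the filesystem type of /sys/fs/cgroup: cgroup2fs indicates the unified v2 hierarchy, while tmpfs indicates legacy v1, as on this Ubuntu 20.04 host:

	stat -fc %T /sys/fs/cgroup
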
	I1210 06:56:01.674032  288031 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:56:01.689298  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:56:01.702121  288031 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:56:01.702192  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:56:01.720186  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:56:01.738710  288031 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:56:01.852215  288031 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:56:01.989095  288031 docker.go:234] disabling docker service ...
	I1210 06:56:01.989232  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:56:02.016451  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:56:02.030687  288031 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:56:02.153586  288031 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:56:02.280278  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:56:02.293652  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:56:02.308576  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:02.458303  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:56:02.467239  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:56:02.475789  288031 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:56:02.475860  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:56:02.484995  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.493944  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:56:02.503478  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.512024  288031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:56:02.520354  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:56:02.529401  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:56:02.538409  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:56:02.548300  288031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:56:02.556042  288031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:56:02.563716  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:02.677702  288031 ssh_runner.go:195] Run: sudo systemctl restart containerd
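
Taken together, the sed edits above pin the CRI plugin settings before containerd is restarted. Assuming a stock kicbase config.toml, the lines they leave behind look roughly like this (indentation and section nesting are approximations):

	$ grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.10.1"
	    restrict_oom_score_adj = false
	    conf_dir = "/etc/cni/net.d"
	    SystemdCgroup = false
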
	I1210 06:56:02.766228  288031 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:56:02.766303  288031 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:56:02.770737  288031 start.go:564] Will wait 60s for crictl version
	I1210 06:56:02.770834  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:02.775190  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:56:02.800314  288031 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:56:02.800416  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.821570  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.847675  288031 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:56:02.850751  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:56:02.867882  288031 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:56:02.871991  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
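
The one-liner above is an idempotent hosts update: filter out any stale host.minikube.internal entry, append the fresh gateway mapping, and cp the temp file back over /etc/hosts (sed -i, which renames a temp file into place, typically fails on the bind-mounted hosts file inside a container). The same pattern, generalized:

	{ grep -v 'host.minikube.internal' /etc/hosts; \
	  printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/hosts.$$ \
	  && sudo cp /tmp/hosts.$$ /etc/hosts
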
	I1210 06:56:02.885356  288031 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:56:02.888273  288031 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:56:02.888501  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.049684  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.199179  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.344408  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:56:03.344500  288031 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:56:03.372099  288031 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:56:03.372123  288031 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
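
LoadCachedImages starts by asking the runtime which of the required images already exist; every "Checking existence" line below is one such probe. The same check by hand, with either CRI client (the ctr form is exactly what the log runs):

	sudo ctr -n k8s.io images ls 'name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1'
	sudo crictl images --output json | grep -c kube-apiserver
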
	I1210 06:56:03.372188  288031 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.372216  288031 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.372401  288031 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.372426  288031 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.372484  288031 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.372525  288031 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.372561  288031 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.372197  288031 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.374671  288031 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.374725  288031 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374874  288031 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.374973  288031 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.374986  288031 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.375071  288031 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.727178  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:56:03.727250  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.731066  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:56:03.731131  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.735451  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:56:03.735512  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.736230  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:56:03.736288  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.743134  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:56:03.743203  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:56:03.749746  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:56:03.749821  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.753657  288031 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:56:03.753695  288031 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.753742  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.773282  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:56:03.773355  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.790557  288031 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:56:03.790597  288031 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.790644  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.790733  288031 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:56:03.790752  288031 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.790779  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.799555  288031 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:56:03.799644  288031 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.799725  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.806996  288031 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:56:03.807106  288031 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.807186  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.814114  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.814221  288031 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:56:03.814280  288031 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.814358  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.826776  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.826945  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.827124  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.827225  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:03.827327  288031 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:56:03.827372  288031 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.827436  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.903162  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.903368  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.906563  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.906718  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.906821  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.906908  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.907050  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.003323  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.003515  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:04.011136  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.011298  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.011413  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:04.011544  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:04.011642  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:04.089211  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.089350  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.089480  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.134911  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:56:04.135033  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:04.135102  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:56:04.135154  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.135223  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.135271  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:56:04.135322  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:04.135372  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.135418  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.155745  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.155780  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:56:04.155836  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.155928  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.221987  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222077  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:56:04.222179  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:56:04.222222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:56:04.222311  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:56:04.222345  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:56:04.222453  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222565  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222646  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:56:04.222687  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:56:04.222775  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222808  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:56:04.300685  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.300730  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1210 06:56:04.320496  288031 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.321128  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1210 06:56:04.472464  288031 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:56:04.472630  288031 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:56:04.472710  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.604775  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:56:04.616616  288031 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:56:04.616662  288031 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.616713  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:04.705496  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.795703  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.795789  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.834471  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074424  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.27860408s)
	I1210 06:56:06.074538  288031 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.240038061s)
	I1210 06:56:06.074651  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074744  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:56:06.074784  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.074841  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.117004  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:56:06.117113  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:07.020903  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:56:07.020935  288031 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.020987  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.021057  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:56:07.021071  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:56:08.105154  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.084144622s)
	I1210 06:56:08.105190  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:56:08.105213  288031 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:08.105277  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:09.435879  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.330576141s)
	I1210 06:56:09.435909  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:56:09.435927  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:09.435980  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:10.441205  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.005199332s)
	I1210 06:56:10.441234  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:56:10.441253  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:10.441308  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:11.471539  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.03020309s)
	I1210 06:56:11.471569  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:56:11.471585  288031 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.471630  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.808584  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:56:11.808617  288031 cache_images.go:125] Successfully loaded all cached images
	I1210 06:56:11.808624  288031 cache_images.go:94] duration metric: took 8.436487473s to LoadCachedImages
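
[editor's note] The block above is minikube's cached-image load loop: for each image it runs `sudo ctr -n=k8s.io images ls name==<ref>` to test existence, removes stale refs with `crictl rmi`, scp's the cached tarball to /var/lib/minikube/images, and imports it with `sudo ctr -n=k8s.io images import`. A minimal Go sketch of that check-then-import step, using only the CLI commands shown in the log (the helper names and the hardcoded paths here are illustrative, not minikube's API):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// imageExists mirrors `sudo ctr -n=k8s.io images ls name==<ref>` from the log.
func imageExists(ref string) (bool, error) {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+ref).Output()
	if err != nil {
		return false, err
	}
	// ctr prints a header line plus one row per matching image.
	return strings.Contains(string(out), ref), nil
}

// importImage mirrors `sudo ctr -n=k8s.io images import <tarball>`.
func importImage(tarball string) error {
	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).Run()
}

func main() {
	ref := "registry.k8s.io/pause:3.10.1" // one of the images checked above
	ok, err := imageExists(ref)
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		if err := importImage("/var/lib/minikube/images/pause_3.10.1"); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("image present:", ref)
}
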
	I1210 06:56:11.808636  288031 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:56:11.808725  288031 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:56:11.808792  288031 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:56:11.836989  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:56:11.837009  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:56:11.837023  288031 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:56:11.837046  288031 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:56:11.837170  288031 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:56:11.837238  288031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.845539  288031 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:56:11.845605  288031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.853470  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:56:11.853499  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:56:11.853544  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:56:11.853564  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:56:11.853477  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:11.853636  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:56:11.870493  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:56:11.870518  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:56:11.870493  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:56:11.870541  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:56:11.870547  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:56:11.892072  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:56:11.892110  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
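
[editor's note] The binary transfer above follows the `?checksum=file:<url>.sha256` URLs printed at the "Not caching binary" lines: each kubeadm/kubelet/kubectl download is pinned to the .sha256 file published next to it on dl.k8s.io. A hedged, self-contained Go sketch of that download-and-verify pattern (the URL is the one in this log; `fetch` is a hypothetical helper, not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url to dest and returns the hex SHA-256 of what was written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	// Hash while writing so the payload is read only once.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet"
	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Get(base + ".sha256") // the published checksum file
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("kubelet verified")
}
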
	I1210 06:56:12.684721  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:56:12.692932  288031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 06:56:12.706015  288031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:56:12.719741  288031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1210 06:56:12.733262  288031 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:56:12.737005  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
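
[editor's note] The bash one-liner above updates /etc/hosts idempotently: it strips any prior line ending in a tab plus control-plane.minikube.internal, appends the current mapping, and copies the result back via sudo. A commented Go equivalent of the same logic (a sketch only; it writes to /tmp/hosts.new rather than replacing /etc/hosts, which needs root as in the log):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// upsertHost drops any existing mapping for name and appends "ip\tname".
func upsertHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(hosts, "\n") {
		// Same filter as the grep -v $'\t<name>$' in the one-liner above.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		keep = append(keep, line)
	}
	return strings.Join(keep, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	out := upsertHost(strings.TrimRight(string(data), "\n"),
		"192.168.76.2", "control-plane.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
		log.Fatal(err)
	}
}
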
	I1210 06:56:12.746629  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:12.858808  288031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:56:12.875513  288031 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 06:56:12.875541  288031 certs.go:195] generating shared ca certs ...
	I1210 06:56:12.875592  288031 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:12.875802  288031 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:56:12.875887  288031 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:56:12.875902  288031 certs.go:257] generating profile certs ...
	I1210 06:56:12.875985  288031 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 06:56:12.876002  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt with IP's: []
	I1210 06:56:13.076032  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt ...
	I1210 06:56:13.076068  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt: {Name:mkf7bb14938883b10d68a49b8ce34d3c2146efc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076259  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key ...
	I1210 06:56:13.076271  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key: {Name:mk990176085bdcef2cd12b2c8873345669259230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076363  288031 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 06:56:13.076378  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 06:56:13.460966  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb ...
	I1210 06:56:13.461005  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb: {Name:mk5f1859a12684f1b2417133b2abe5b0cc7114b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461185  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb ...
	I1210 06:56:13.461201  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb: {Name:mk2fe3162e58fbb8aab1f63fc8fe494c68c7632e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461286  288031 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt
	I1210 06:56:13.461362  288031 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key
	I1210 06:56:13.461420  288031 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 06:56:13.461442  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt with IP's: []
	I1210 06:56:13.583028  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt ...
	I1210 06:56:13.583055  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt: {Name:mk85677ff817d69f49f025f68ba6ab54589ffc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.583231  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key ...
	I1210 06:56:13.583244  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key: {Name:mke6a5c0bf07d17ef15ab36a3c463f1af3ef2e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
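
[editor's note] The certs.go/crypto.go steps above generate profile certificates signed by the cached minikubeCA, with the apiserver cert bound to the SAN IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. A self-contained sketch of that CA-signed-cert flow using Go's crypto/x509 (a throwaway CA is created here for illustration; minikube reuses its existing ca.key, and the subjects and lifetimes are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN IPs from the apiserver cert generated in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
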
	I1210 06:56:13.583429  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:56:13.583478  288031 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:56:13.583491  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:56:13.583519  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:56:13.583547  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:56:13.583575  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:56:13.583632  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:13.584222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:56:13.602582  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:56:13.622006  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:56:13.639862  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:56:13.658651  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:56:13.680241  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:56:13.700023  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:56:13.719444  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:56:13.736929  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:56:13.754184  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:56:13.772309  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:56:13.789835  288031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:56:13.801999  288031 ssh_runner.go:195] Run: openssl version
	I1210 06:56:13.808616  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.815940  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:56:13.823193  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826846  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826907  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.867540  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.875137  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.882628  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.890295  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:56:13.898236  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902139  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902206  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.945638  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:56:13.954270  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:56:13.962740  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.971630  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:56:13.979227  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983241  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983361  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:56:14.024714  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:56:14.032691  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
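
[editor's note] The openssl/ln pairs above implement the c_rehash convention: every CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so OpenSSL can locate it by hashed lookup. A small Go sketch of one such step, shelling out to the same openssl command the log runs (helper name is illustrative):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehash computes the OpenSSL subject hash of a PEM cert and creates the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the `openssl x509 -hash` +
// `ln -fs` pair in the log above.
func rehash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
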
	I1210 06:56:14.040565  288031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:56:14.044474  288031 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:56:14.044584  288031 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:56:14.044664  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:56:14.044727  288031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:56:14.070428  288031 cri.go:89] found id: ""
	I1210 06:56:14.070496  288031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:56:14.078638  288031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:56:14.086602  288031 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:56:14.086714  288031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:56:14.094816  288031 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:56:14.094840  288031 kubeadm.go:158] found existing configuration files:
	
	I1210 06:56:14.094921  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:56:14.102760  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:56:14.102835  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:56:14.110132  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:56:14.117992  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:56:14.118105  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:56:14.125816  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.133574  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:56:14.133680  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.141074  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:56:14.148896  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:56:14.148967  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:56:14.156718  288031 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:56:14.194063  288031 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:56:14.194238  288031 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:56:14.263671  288031 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:56:14.263788  288031 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:56:14.263850  288031 kubeadm.go:319] OS: Linux
	I1210 06:56:14.263931  288031 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:56:14.264002  288031 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:56:14.264081  288031 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:56:14.264151  288031 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:56:14.264228  288031 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:56:14.264299  288031 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:56:14.264372  288031 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:56:14.264442  288031 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:56:14.264516  288031 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:56:14.342503  288031 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:56:14.342615  288031 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:56:14.342711  288031 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:56:14.355434  288031 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:56:14.365012  288031 out.go:252]   - Generating certificates and keys ...
	I1210 06:56:14.365181  288031 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:56:14.365286  288031 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:56:14.676353  288031 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:56:14.776617  288031 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:56:14.831643  288031 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:56:15.344970  288031 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:56:15.738235  288031 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:56:15.738572  288031 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:15.867481  288031 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:56:15.867849  288031 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:16.524781  288031 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:56:16.857089  288031 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:56:17.277023  288031 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:56:17.277264  288031 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:56:17.403345  288031 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:56:17.551288  288031 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:56:17.791106  288031 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:56:17.963150  288031 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:56:18.214947  288031 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:56:18.216045  288031 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:56:18.219851  288031 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:56:18.238517  288031 out.go:252]   - Booting up control plane ...
	I1210 06:56:18.238649  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:56:18.238733  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:56:18.238803  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:56:18.250848  288031 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:56:18.250999  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:56:18.258800  288031 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:56:18.259935  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:56:18.260158  288031 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:56:18.423681  288031 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:56:18.423807  288031 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:58:34.995507  266079 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000116079s
	I1210 06:58:34.995538  266079 kubeadm.go:319] 
	I1210 06:58:34.995597  266079 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:58:34.995631  266079 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:58:34.995735  266079 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:58:34.995740  266079 kubeadm.go:319] 
	I1210 06:58:34.995845  266079 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:58:34.995886  266079 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:58:34.995923  266079 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:58:34.995928  266079 kubeadm.go:319] 
	I1210 06:58:35.000052  266079 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:58:35.000496  266079 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:58:35.000614  266079 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:58:35.000866  266079 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:58:35.000872  266079 kubeadm.go:319] 
	I1210 06:58:35.000939  266079 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
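
[editor's note] The init failure above reduces to one symptom: kubeadm's kubelet-check never got a response from http://127.0.0.1:10248/healthz (connection refused, i.e. the kubelet never came up). When reproducing this by hand on the node, a probe equivalent to the `curl -sSL http://127.0.0.1:10248/healthz` call kubeadm describes can be run as a small Go program (a sketch; the endpoint and port are the ones named in the error):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		// Matches the log: connection refused means the kubelet is not running.
		fmt.Println("kubelet unhealthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}
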
	I1210 06:58:35.001867  266079 kubeadm.go:403] duration metric: took 8m5.625012416s to StartCluster
	I1210 06:58:35.001964  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:58:35.002061  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:58:35.029739  266079 cri.go:89] found id: ""
	I1210 06:58:35.029800  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.029809  266079 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:58:35.029823  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:58:35.029903  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:58:35.059137  266079 cri.go:89] found id: ""
	I1210 06:58:35.059162  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.059171  266079 logs.go:284] No container was found matching "etcd"
	I1210 06:58:35.059177  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:58:35.059235  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:58:35.084571  266079 cri.go:89] found id: ""
	I1210 06:58:35.084597  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.084606  266079 logs.go:284] No container was found matching "coredns"
	I1210 06:58:35.084613  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:58:35.084678  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:58:35.113733  266079 cri.go:89] found id: ""
	I1210 06:58:35.113756  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.113765  266079 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:58:35.113772  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:58:35.113830  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:58:35.138121  266079 cri.go:89] found id: ""
	I1210 06:58:35.138147  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.138156  266079 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:58:35.138162  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:58:35.138219  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:58:35.164400  266079 cri.go:89] found id: ""
	I1210 06:58:35.164423  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.164432  266079 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:58:35.164438  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:58:35.164496  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:58:35.188393  266079 cri.go:89] found id: ""
	I1210 06:58:35.188416  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.188424  266079 logs.go:284] No container was found matching "kindnet"
	I1210 06:58:35.188434  266079 logs.go:123] Gathering logs for containerd ...
	I1210 06:58:35.188445  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:58:35.229460  266079 logs.go:123] Gathering logs for container status ...
	I1210 06:58:35.229497  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:58:35.258104  266079 logs.go:123] Gathering logs for kubelet ...
	I1210 06:58:35.258133  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:58:35.314798  266079 logs.go:123] Gathering logs for dmesg ...
	I1210 06:58:35.314833  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:58:35.327838  266079 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:58:35.327863  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:58:35.388749  266079 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 06:58:35.388774  266079 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:58:35.388804  266079 out.go:285] * 
	W1210 06:58:35.388856  266079 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.388874  266079 out.go:285] * 
	W1210 06:58:35.390983  266079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:58:35.395719  266079 out.go:203] 
	W1210 06:58:35.397686  266079 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.397726  266079 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:58:35.397746  266079 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:58:35.401447  266079 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:50:21 no-preload-320236 containerd[758]: time="2025-12-10T06:50:21.196813280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.210073933Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.212364720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.226922228Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.227913310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.535290347Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.537474679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.544644107Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.545322891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.456656579Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.458899750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.466570582Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.467486192Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.601587990Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.603772633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.613560498Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.614339090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.601365588Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.603910785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.611697236Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.612195825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.983871691Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.986420408Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.993743905Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.994155757Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:36.481325    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:36.482004    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:36.484017    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:36.484564    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:36.485713    5533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 06:58:36 up  1:41,  0 user,  load average: 1.00, 1.53, 1.98
	Linux no-preload-320236 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:58:33 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:34 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 06:58:34 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:34 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:34 no-preload-320236 kubelet[5344]: E1210 06:58:34.224718    5344 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:34 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:34 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:34 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 06:58:34 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:34 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:34 no-preload-320236 kubelet[5349]: E1210 06:58:34.966757    5349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:34 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:34 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:35 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 06:58:35 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:35 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:35 no-preload-320236 kubelet[5440]: E1210 06:58:35.721077    5440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:35 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:35 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:36 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:58:36 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:36 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:36 no-preload-320236 kubelet[5537]: E1210 06:58:36.489326    5537 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:36 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:36 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 6 (343.71768ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:58:36.954394  292930 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (506.65s)
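
Note on the failure above: the kubelet journal captured in this log shows the root cause. kubelet v1.35.0-rc.1 exits at startup with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning states that keeping cgroup v1 requires setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal diagnosis sketch under the setup shown in these logs (the profile name no-preload-320236 is taken from the output above; the commands assume a working minikube binary on the host):

	# cgroup v2 hosts report cgroup2fs here; cgroup v1 hosts typically report tmpfs
	minikube ssh -p no-preload-320236 -- stat -fc %T /sys/fs/cgroup
	# the two diagnostics suggested by kubeadm, run inside the node
	minikube ssh -p no-preload-320236 -- sudo systemctl status kubelet
	minikube ssh -p no-preload-320236 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

Following the wording of the preflight warning, a kubelet configuration fragment opting back into cgroup v1 might look like the sketch below; the file name and the serialized field spelling failCgroupV1 are assumptions inferred from the warning, which names the option 'FailCgroupV1':

	# hypothetical file name; failCgroupV1 spelling assumed from the kubeadm warning
	cat > kubelet-cgroupv1-patch.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

The warning itself recommends the durable fix: migrating the host to cgroup v2 (see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1).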

TestStartStop/group/newest-cni/serial/FirstStart (507.63s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1210 06:56:24.751776    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:38.876219    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:56:44.570788    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m25.923552513s)

-- stdout --
	* [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1210 06:55:54.981794  288031 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:55:54.981926  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.981937  288031 out.go:374] Setting ErrFile to fd 2...
	I1210 06:55:54.981942  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.982225  288031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:55:54.982645  288031 out.go:368] Setting JSON to false
	I1210 06:55:54.983532  288031 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5905,"bootTime":1765343850,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:55:54.983604  288031 start.go:143] virtualization:  
	I1210 06:55:54.987589  288031 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:55:54.990952  288031 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:55:54.991143  288031 notify.go:221] Checking for updates...
	I1210 06:55:54.999718  288031 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:55:55.004245  288031 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:55:55.007947  288031 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:55:55.011263  288031 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:55:55.014567  288031 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:55:55.018346  288031 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:55:55.018474  288031 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:55:55.050040  288031 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:55:55.050159  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.110692  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.101413341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.110829  288031 docker.go:319] overlay module found
	I1210 06:55:55.114039  288031 out.go:179] * Using the docker driver based on user configuration
	I1210 06:55:55.116970  288031 start.go:309] selected driver: docker
	I1210 06:55:55.116990  288031 start.go:927] validating driver "docker" against <nil>
	I1210 06:55:55.117003  288031 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:55:55.117774  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.187658  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.175913019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.187828  288031 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:55:55.187862  288031 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:55:55.188080  288031 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:55:55.191065  288031 out.go:179] * Using Docker driver with root privileges
	I1210 06:55:55.193975  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:55:55.194040  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:55:55.194060  288031 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:55:55.194137  288031 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:55:55.197188  288031 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 06:55:55.199998  288031 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:55:55.202945  288031 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:55:55.205774  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:55:55.205946  288031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:55:55.228535  288031 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:55:55.228555  288031 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:55:55.253626  288031 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 06:55:55.392999  288031 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 06:55:55.393221  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:55:55.393258  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json: {Name:mke358d8c3878b6ccc086ae75b08bfbb6079278d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:55:55.393289  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.393417  288031 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:55:55.393461  288031 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.393543  288031 start.go:364] duration metric: took 46.523µs to acquireMachinesLock for "newest-cni-168808"
	I1210 06:55:55.393571  288031 start.go:93] Provisioning new machine with config: &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:55:55.393679  288031 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:55:55.397127  288031 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:55:55.397358  288031 start.go:159] libmachine.API.Create for "newest-cni-168808" (driver="docker")
	I1210 06:55:55.397385  288031 client.go:173] LocalClient.Create starting
	I1210 06:55:55.397438  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 06:55:55.397479  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397497  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397545  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 06:55:55.397561  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397572  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397949  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:55:55.421587  288031 cli_runner.go:211] docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:55:55.421662  288031 network_create.go:284] running [docker network inspect newest-cni-168808] to gather additional debugging logs...
	I1210 06:55:55.421680  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808
	W1210 06:55:55.440445  288031 cli_runner.go:211] docker network inspect newest-cni-168808 returned with exit code 1
	I1210 06:55:55.440476  288031 network_create.go:287] error running [docker network inspect newest-cni-168808]: docker network inspect newest-cni-168808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-168808 not found
	I1210 06:55:55.440491  288031 network_create.go:289] output of [docker network inspect newest-cni-168808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-168808 not found
	
	** /stderr **
	I1210 06:55:55.440592  288031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:55:55.472278  288031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 06:55:55.472550  288031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 06:55:55.472849  288031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 06:55:55.473245  288031 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fe00}
	I1210 06:55:55.473272  288031 network_create.go:124] attempt to create docker network newest-cni-168808 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:55:55.473327  288031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-168808 newest-cni-168808
	I1210 06:55:55.535150  288031 network_create.go:108] docker network newest-cni-168808 192.168.76.0/24 created
	I1210 06:55:55.535181  288031 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-168808" container
	I1210 06:55:55.535292  288031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:55:55.551392  288031 cli_runner.go:164] Run: docker volume create newest-cni-168808 --label name.minikube.sigs.k8s.io=newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:55:55.554117  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.578140  288031 oci.go:103] Successfully created a docker volume newest-cni-168808
	I1210 06:55:55.578234  288031 cli_runner.go:164] Run: docker run --rm --name newest-cni-168808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --entrypoint /usr/bin/test -v newest-cni-168808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:55:55.718018  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.932804  288031 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.932932  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:55:55.932947  288031 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 167.936µs
	I1210 06:55:55.932957  288031 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:55:55.932978  288031 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933015  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:55:55.933025  288031 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 53.498µs
	I1210 06:55:55.933032  288031 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933044  288031 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933075  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:55:55.933085  288031 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 42.708µs
	I1210 06:55:55.933092  288031 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933106  288031 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933143  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:55:55.933152  288031 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 47.762µs
	I1210 06:55:55.933164  288031 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933176  288031 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933206  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:55:55.933216  288031 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 41.01µs
	I1210 06:55:55.933228  288031 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933236  288031 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933268  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:55:55.933277  288031 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.945µs
	I1210 06:55:55.933283  288031 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:55:55.933292  288031 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933320  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:55:55.933328  288031 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.703µs
	I1210 06:55:55.933334  288031 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:55:55.933343  288031 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933369  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:55:55.933381  288031 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.287µs
	I1210 06:55:55.933387  288031 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:55:55.933393  288031 cache.go:87] Successfully saved all images to host disk.
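Each "cache image" step above is a lock acquisition, a stat on the destination tarball, and an early return when it already exists, which is why every save reports a microsecond duration. A simplified sketch of that gate; the real code uses file-based locks with the Delay/Timeout values shown in the log, and `saveToTar` here is a hypothetical stand-in for the export logic:

```go
package cache

import (
	"os"
	"sync"
)

// saveToTar is a hypothetical stand-in for the real image-export logic.
func saveToTar(image, dst string) error { return nil }

// cacheImage mirrors the acquire-lock / exists-check / save sequence in
// the log: a fast stat on the destination tarball short-circuits the
// save, so a warm cache completes each image in microseconds.
func cacheImage(mu *sync.Mutex, image, dst string) error {
	mu.Lock()
	defer mu.Unlock()
	if _, err := os.Stat(dst); err == nil {
		return nil // tarball already cached
	}
	return saveToTar(image, dst)
}
```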
	I1210 06:55:56.133246  288031 oci.go:107] Successfully prepared a docker volume newest-cni-168808
	I1210 06:55:56.133310  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 06:55:56.133458  288031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:55:56.133555  288031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:55:56.190219  288031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-168808 --name newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-168808 --network newest-cni-168808 --ip 192.168.76.2 --volume newest-cni-168808:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:55:56.510233  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Running}}
	I1210 06:55:56.532276  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.559120  288031 cli_runner.go:164] Run: docker exec newest-cni-168808 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:55:56.616474  288031 oci.go:144] the created container "newest-cni-168808" has a running status.
	I1210 06:55:56.616510  288031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa...
	I1210 06:55:56.920989  288031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:55:56.944042  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.969366  288031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:55:56.969535  288031 kic_runner.go:114] Args: [docker exec --privileged newest-cni-168808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:55:57.033434  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:57.058007  288031 machine.go:94] provisionDockerMachine start ...
	I1210 06:55:57.058103  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:55:57.089237  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:55:57.089566  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:55:57.089575  288031 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:55:57.090220  288031 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58770->127.0.0.1:33093: read: connection reset by peer
	I1210 06:56:00.364112  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.364135  288031 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 06:56:00.364212  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.456773  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.457119  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.457133  288031 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 06:56:00.645316  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.645407  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.664033  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.664382  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.664404  288031 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:56:00.815306  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:56:00.815331  288031 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:56:00.815364  288031 ubuntu.go:190] setting up certificates
	I1210 06:56:00.815372  288031 provision.go:84] configureAuth start
	I1210 06:56:00.815439  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:00.832798  288031 provision.go:143] copyHostCerts
	I1210 06:56:00.832883  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:56:00.832898  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:56:00.832975  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:56:00.833075  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:56:00.833087  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:56:00.833119  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:56:00.833186  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:56:00.833196  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:56:00.833222  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:56:00.833276  288031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
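The server certificate generated above carries the listed SANs (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-168808) so the endpoint validates under any of those names. A rough sketch with crypto/x509, self-signed for brevity where the real flow signs with ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs from the log line above: IPs and hostnames the server
	// may be reached on.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-168808"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-168808"},
	}
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	// Self-signed here for brevity; the real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```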
	I1210 06:56:00.918781  288031 provision.go:177] copyRemoteCerts
	I1210 06:56:00.919089  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:56:00.919173  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.937214  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.043240  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:56:01.061326  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:56:01.079140  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:56:01.096712  288031 provision.go:87] duration metric: took 281.317584ms to configureAuth
	I1210 06:56:01.096743  288031 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:56:01.096994  288031 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:56:01.097006  288031 machine.go:97] duration metric: took 4.038973217s to provisionDockerMachine
	I1210 06:56:01.097025  288031 client.go:176] duration metric: took 5.699623594s to LocalClient.Create
	I1210 06:56:01.097050  288031 start.go:167] duration metric: took 5.699693115s to libmachine.API.Create "newest-cni-168808"
	I1210 06:56:01.097057  288031 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 06:56:01.097073  288031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:56:01.097147  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:56:01.097204  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.117411  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.225094  288031 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:56:01.228823  288031 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:56:01.228858  288031 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:56:01.228870  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:56:01.228945  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:56:01.229044  288031 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:56:01.229154  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:56:01.237207  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:01.255822  288031 start.go:296] duration metric: took 158.728391ms for postStartSetup
	I1210 06:56:01.256262  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.275219  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:56:01.275529  288031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:56:01.275586  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.293397  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.396136  288031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:56:01.401043  288031 start.go:128] duration metric: took 6.00734179s to createHost
	I1210 06:56:01.401068  288031 start.go:83] releasing machines lock for "newest-cni-168808", held for 6.007509906s
	I1210 06:56:01.401140  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.417888  288031 ssh_runner.go:195] Run: cat /version.json
	I1210 06:56:01.417948  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.418253  288031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:56:01.418318  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.442401  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.449051  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.632926  288031 ssh_runner.go:195] Run: systemctl --version
	I1210 06:56:01.640549  288031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:56:01.645141  288031 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:56:01.645218  288031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:56:01.673901  288031 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:56:01.673935  288031 start.go:496] detecting cgroup driver to use...
	I1210 06:56:01.673969  288031 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:56:01.674032  288031 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:56:01.689298  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:56:01.702121  288031 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:56:01.702192  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:56:01.720186  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:56:01.738710  288031 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:56:01.852215  288031 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:56:01.989095  288031 docker.go:234] disabling docker service ...
	I1210 06:56:01.989232  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:56:02.016451  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:56:02.030687  288031 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:56:02.153586  288031 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:56:02.280278  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:56:02.293652  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:56:02.308576  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:02.458303  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:56:02.467239  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:56:02.475789  288031 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:56:02.475860  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:56:02.484995  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.493944  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:56:02.503478  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.512024  288031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:56:02.520354  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:56:02.529401  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:56:02.538409  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:56:02.548300  288031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:56:02.556042  288031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:56:02.563716  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:02.677702  288031 ssh_runner.go:195] Run: sudo systemctl restart containerd
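The run of `sed -i -r` commands above shares one shape: an anchored, indentation-preserving replace inside /etc/containerd/config.toml, applied before the daemon-reload and containerd restart. An equivalent in-place rewrite sketched in Go (same three substitutions, not the tool minikube actually uses):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// The same substitutions the log performs with sed -r; ${1}
	// preserves the original indentation of each key.
	for _, r := range []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // "cgroupfs" driver
	} {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
```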
	I1210 06:56:02.766228  288031 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:56:02.766303  288031 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:56:02.770737  288031 start.go:564] Will wait 60s for crictl version
	I1210 06:56:02.770834  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:02.775190  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:56:02.800314  288031 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:56:02.800416  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.821570  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.847675  288031 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:56:02.850751  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:56:02.867882  288031 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:56:02.871991  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:56:02.885356  288031 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:56:02.888273  288031 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:56:02.888501  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.049684  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.199179  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.344408  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:56:03.344500  288031 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:56:03.372099  288031 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:56:03.372123  288031 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:56:03.372188  288031 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.372216  288031 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.372401  288031 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.372426  288031 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.372484  288031 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.372525  288031 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.372561  288031 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.372197  288031 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.374671  288031 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.374725  288031 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374874  288031 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.374973  288031 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.374986  288031 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.375071  288031 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.727178  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:56:03.727250  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.731066  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:56:03.731131  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.735451  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:56:03.735512  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.736230  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:56:03.736288  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.743134  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:56:03.743203  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:56:03.749746  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:56:03.749821  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.753657  288031 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:56:03.753695  288031 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.753742  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.773282  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:56:03.773355  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.790557  288031 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:56:03.790597  288031 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.790644  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.790733  288031 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:56:03.790752  288031 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.790779  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.799555  288031 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:56:03.799644  288031 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.799725  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.806996  288031 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:56:03.807106  288031 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.807186  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.814114  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.814221  288031 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:56:03.814280  288031 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.814358  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.826776  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.826945  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.827124  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.827225  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:03.827327  288031 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:56:03.827372  288031 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.827436  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.903162  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.903368  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.906563  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.906718  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.906821  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.906908  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.907050  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.003323  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.003515  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:04.011136  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.011298  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.011413  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:04.011544  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:04.011642  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:04.089211  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.089350  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.089480  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.134911  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:56:04.135033  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:04.135102  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:56:04.135154  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.135223  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.135271  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:56:04.135322  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:04.135372  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.135418  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.155745  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.155780  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:56:04.155836  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.155928  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.221987  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222077  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:56:04.222179  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:56:04.222222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:56:04.222311  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:56:04.222345  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:56:04.222453  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222565  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222646  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:56:04.222687  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:56:04.222775  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222808  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:56:04.300685  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.300730  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
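Every transfer in this block is gated by the same remote existence check: `stat -c "%s %y"` on the target path, with a non-zero exit ("Process exited with status 1") triggering the scp from the host cache. A sketch of that gate; `Runner` is a hypothetical interface standing in for the ssh_runner seen in the log:

```go
package transfer

import "fmt"

// Runner abstracts the ssh_runner from the log (hypothetical interface).
type Runner interface {
	Run(cmd string) error                  // remote command; error on non-zero exit
	Copy(localSrc, remoteDst string) error // scp-style transfer
}

// ensureImage copies the cached tarball only when the remote stat fails,
// mirroring the "existence check ... No such file or directory" lines.
func ensureImage(r Runner, cacheDir, imagesDir, name string) error {
	remote := imagesDir + "/" + name
	if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
		return nil // already on the node; skip the transfer
	}
	return r.Copy(cacheDir+"/"+name, remote)
}
```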
	I1210 06:56:04.320496  288031 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.321128  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1210 06:56:04.472464  288031 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:56:04.472630  288031 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:56:04.472710  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.604775  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:56:04.616616  288031 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:56:04.616662  288031 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.616713  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:04.705496  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.795703  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.795789  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.834471  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074424  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.27860408s)
	I1210 06:56:06.074538  288031 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.240038061s)
	I1210 06:56:06.074651  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074744  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:56:06.074784  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.074841  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.117004  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:56:06.117113  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:07.020903  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:56:07.020935  288031 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.020987  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.021057  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:56:07.021071  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:56:08.105154  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.084144622s)
	I1210 06:56:08.105190  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:56:08.105213  288031 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:08.105277  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:09.435879  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.330576141s)
	I1210 06:56:09.435909  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:56:09.435927  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:09.435980  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:10.441205  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.005199332s)
	I1210 06:56:10.441234  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:56:10.441253  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:10.441308  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:11.471539  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.03020309s)
	I1210 06:56:11.471569  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:56:11.471585  288031 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.471630  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.808584  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:56:11.808617  288031 cache_images.go:125] Successfully loaded all cached images
	I1210 06:56:11.808624  288031 cache_images.go:94] duration metric: took 8.436487473s to LoadCachedImages
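With the tarballs on the node, each image is imported one at a time through `ctr -n=k8s.io images import`, which is why the per-image timings above add up to the 8.4s LoadCachedImages total. A sketch that shells out to the same command, assuming it runs where the node's containerd socket is reachable:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Tarballs staged under /var/lib/minikube/images in the log above.
	images := []string{
		"pause_3.10.1", "kube-scheduler_v1.35.0-rc.1",
		"kube-controller-manager_v1.35.0-rc.1", "coredns_v1.13.1",
		"etcd_3.6.6-0", "kube-proxy_v1.35.0-rc.1",
		"kube-apiserver_v1.35.0-rc.1", "storage-provisioner_v5",
	}
	for _, img := range images { // sequential, matching the log's ordering
		tar := "/var/lib/minikube/images/" + img
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("import %s: %v\n%s", tar, err, out))
		}
	}
}
```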
	I1210 06:56:11.808636  288031 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:56:11.808725  288031 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:56:11.808792  288031 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:56:11.836989  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:56:11.837009  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:56:11.837023  288031 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:56:11.837046  288031 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:56:11.837170  288031 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
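The generated multi-document config above is what gets handed to kubeadm init below, after being copied to /var/tmp/minikube/kubeadm.yaml. A minimal sketch for sanity-checking it in isolation, assuming the kubeadm build at this version supports the "config validate" subcommand:

    # Validate the kubeadm/kubelet/kube-proxy documents against their API schemas
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml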
	I1210 06:56:11.837238  288031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.845539  288031 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:56:11.845605  288031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.853470  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:56:11.853499  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:56:11.853544  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:56:11.853564  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:56:11.853477  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:11.853636  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:56:11.870493  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:56:11.870518  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:56:11.870493  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:56:11.870541  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:56:11.870547  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:56:11.892072  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:56:11.892110  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
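The "checksum=file:" suffix on the download URLs above means each binary is verified against its published .sha256 file. A minimal sketch of the equivalent manual check, using the same dl.k8s.io URLs that appear in this log:

    curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet
    curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
    # The .sha256 file holds only the bare hash; sha256sum expects "<hash>  <filename>"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check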
	I1210 06:56:12.684721  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:56:12.692932  288031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 06:56:12.706015  288031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:56:12.719741  288031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1210 06:56:12.733262  288031 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:56:12.737005  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:56:12.746629  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:12.858808  288031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:56:12.875513  288031 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 06:56:12.875541  288031 certs.go:195] generating shared ca certs ...
	I1210 06:56:12.875592  288031 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:12.875802  288031 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:56:12.875887  288031 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:56:12.875902  288031 certs.go:257] generating profile certs ...
	I1210 06:56:12.875985  288031 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 06:56:12.876002  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt with IP's: []
	I1210 06:56:13.076032  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt ...
	I1210 06:56:13.076068  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt: {Name:mkf7bb14938883b10d68a49b8ce34d3c2146efc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076259  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key ...
	I1210 06:56:13.076271  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key: {Name:mk990176085bdcef2cd12b2c8873345669259230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076363  288031 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 06:56:13.076378  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 06:56:13.460966  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb ...
	I1210 06:56:13.461005  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb: {Name:mk5f1859a12684f1b2417133b2abe5b0cc7114b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461185  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb ...
	I1210 06:56:13.461201  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb: {Name:mk2fe3162e58fbb8aab1f63fc8fe494c68c7632e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461286  288031 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt
	I1210 06:56:13.461362  288031 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key
	I1210 06:56:13.461420  288031 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 06:56:13.461442  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt with IP's: []
	I1210 06:56:13.583028  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt ...
	I1210 06:56:13.583055  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt: {Name:mk85677ff817d69f49f025f68ba6ab54589ffc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.583231  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key ...
	I1210 06:56:13.583244  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key: {Name:mke6a5c0bf07d17ef15ab36a3c463f1af3ef2e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.583429  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:56:13.583478  288031 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:56:13.583491  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:56:13.583519  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:56:13.583547  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:56:13.583575  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:56:13.583632  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:13.584222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:56:13.602582  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:56:13.622006  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:56:13.639862  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:56:13.658651  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:56:13.680241  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:56:13.700023  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:56:13.719444  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:56:13.736929  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:56:13.754184  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:56:13.772309  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:56:13.789835  288031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:56:13.801999  288031 ssh_runner.go:195] Run: openssl version
	I1210 06:56:13.808616  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.815940  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:56:13.823193  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826846  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826907  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.867540  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.875137  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.882628  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.890295  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:56:13.898236  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902139  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902206  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.945638  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:56:13.954270  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:56:13.962740  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.971630  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:56:13.979227  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983241  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983361  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:56:14.024714  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:56:14.032691  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
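The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is how TLS clients look up CA certificates in /etc/ssl/certs. A minimal sketch of how such a link is derived, using the minikubeCA file from this log:

    # Prints the subject hash; per the log above this is b5213941 for minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0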
	I1210 06:56:14.040565  288031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:56:14.044474  288031 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:56:14.044584  288031 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:56:14.044664  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:56:14.044727  288031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:56:14.070428  288031 cri.go:89] found id: ""
	I1210 06:56:14.070496  288031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:56:14.078638  288031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:56:14.086602  288031 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:56:14.086714  288031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:56:14.094816  288031 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:56:14.094840  288031 kubeadm.go:158] found existing configuration files:
	
	I1210 06:56:14.094921  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:56:14.102760  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:56:14.102835  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:56:14.110132  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:56:14.117992  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:56:14.118105  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:56:14.125816  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.133574  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:56:14.133680  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.141074  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:56:14.148896  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:56:14.148967  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:56:14.156718  288031 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:56:14.194063  288031 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:56:14.194238  288031 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:56:14.263671  288031 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:56:14.263788  288031 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:56:14.263850  288031 kubeadm.go:319] OS: Linux
	I1210 06:56:14.263931  288031 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:56:14.264002  288031 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:56:14.264081  288031 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:56:14.264151  288031 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:56:14.264228  288031 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:56:14.264299  288031 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:56:14.264372  288031 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:56:14.264442  288031 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:56:14.264516  288031 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:56:14.342503  288031 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:56:14.342615  288031 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:56:14.342711  288031 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:56:14.355434  288031 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:56:14.365012  288031 out.go:252]   - Generating certificates and keys ...
	I1210 06:56:14.365181  288031 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:56:14.365286  288031 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:56:14.676353  288031 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:56:14.776617  288031 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:56:14.831643  288031 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:56:15.344970  288031 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:56:15.738235  288031 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:56:15.738572  288031 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:15.867481  288031 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:56:15.867849  288031 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:16.524781  288031 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:56:16.857089  288031 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:56:17.277023  288031 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:56:17.277264  288031 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:56:17.403345  288031 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:56:17.551288  288031 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:56:17.791106  288031 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:56:17.963150  288031 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:56:18.214947  288031 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:56:18.216045  288031 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:56:18.219851  288031 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:56:18.238517  288031 out.go:252]   - Booting up control plane ...
	I1210 06:56:18.238649  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:56:18.238733  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:56:18.238803  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:56:18.250848  288031 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:56:18.250999  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:56:18.258800  288031 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:56:18.259935  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:56:18.260158  288031 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:56:18.423681  288031 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:56:18.423807  288031 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:00:18.423768  288031 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000380171s
	I1210 07:00:18.423796  288031 kubeadm.go:319] 
	I1210 07:00:18.424248  288031 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:00:18.424332  288031 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:00:18.424690  288031 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:00:18.424700  288031 kubeadm.go:319] 
	I1210 07:00:18.424910  288031 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:00:18.424973  288031 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:00:18.425276  288031 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:00:18.425286  288031 kubeadm.go:319] 
	I1210 07:00:18.430059  288031 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:00:18.430830  288031 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:00:18.430957  288031 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:00:18.431231  288031 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:00:18.431244  288031 kubeadm.go:319] 
	I1210 07:00:18.431500  288031 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:00:18.431504  288031 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000380171s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
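The failure mode here is that the kubelet never answered its local healthz probe, so kubeadm could not confirm the control-plane static pods. A minimal sketch of the follow-up the kubeadm output itself suggests, run on the node (e.g. via minikube ssh -p newest-cni-168808):

    systemctl status kubelet
    journalctl -xeu kubelet
    # The exact probe kubeadm polled for 4m0s before giving up
    curl -sSL http://127.0.0.1:10248/healthz
    # Re-running kubeadm init with --v=5 or higher prints the stack trace, per the message above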
	
	I1210 07:00:18.431582  288031 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:00:18.843096  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:00:18.856261  288031 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:00:18.856329  288031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:00:18.864319  288031 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:00:18.864336  288031 kubeadm.go:158] found existing configuration files:
	
	I1210 07:00:18.864386  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:00:18.872311  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:00:18.872378  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:00:18.880473  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:00:18.888809  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:00:18.888898  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:00:18.896694  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:00:18.904593  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:00:18.904713  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:00:18.912542  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:00:18.920717  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:00:18.920789  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:00:18.928124  288031 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:00:18.967512  288031 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:00:18.967907  288031 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:00:19.041388  288031 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:00:19.041560  288031 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:00:19.041615  288031 kubeadm.go:319] OS: Linux
	I1210 07:00:19.041688  288031 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:00:19.041765  288031 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:00:19.041839  288031 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:00:19.041914  288031 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:00:19.041993  288031 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:00:19.042098  288031 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:00:19.042164  288031 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:00:19.042294  288031 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:00:19.042373  288031 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:00:19.108959  288031 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:00:19.109206  288031 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:00:19.109320  288031 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:00:19.119464  288031 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:00:19.124812  288031 out.go:252]   - Generating certificates and keys ...
	I1210 07:00:19.125035  288031 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:00:19.125167  288031 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:00:19.125319  288031 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:00:19.125475  288031 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:00:19.125720  288031 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:00:19.125904  288031 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:00:19.126029  288031 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:00:19.126109  288031 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:00:19.126199  288031 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:00:19.126302  288031 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:00:19.126351  288031 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:00:19.126419  288031 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:00:19.602744  288031 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:00:19.748510  288031 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:00:19.958702  288031 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:00:20.047566  288031 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:00:20.269067  288031 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:00:20.269683  288031 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:00:20.272343  288031 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:00:20.275537  288031 out.go:252]   - Booting up control plane ...
	I1210 07:00:20.275663  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:00:20.275769  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:00:20.275866  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:00:20.294928  288031 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:00:20.295378  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:00:20.304384  288031 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:00:20.304493  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:00:20.305348  288031 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:00:20.437669  288031 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:00:20.437797  288031 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:04:20.438913  288031 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001234655s
	I1210 07:04:20.438947  288031 kubeadm.go:319] 
	I1210 07:04:20.439199  288031 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:04:20.439384  288031 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:04:20.439577  288031 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:04:20.439588  288031 kubeadm.go:319] 
	I1210 07:04:20.439880  288031 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:04:20.439939  288031 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:04:20.439994  288031 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:04:20.440000  288031 kubeadm.go:319] 
	I1210 07:04:20.444885  288031 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:04:20.445319  288031 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:04:20.445433  288031 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:04:20.445673  288031 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:04:20.445684  288031 kubeadm.go:319] 
	I1210 07:04:20.445752  288031 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
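Both init attempts also print the SystemVerification warning about cgroups v1 deprecation, which says that running kubelet v1.35 or newer on a cgroup v1 host requires explicitly setting 'FailCgroupV1' to 'false' in the kubelet configuration. A minimal sketch of that opt-in as an extra field in the KubeletConfiguration document shown earlier in this log; the YAML field name (failCgroupV1) is inferred from the warning text, so treat it as an assumption:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Opt back into cgroup v1 for kubelet >= v1.35, per the SystemVerification warning;
    # the warning notes the corresponding preflight validation must also be skipped explicitly
    failCgroupV1: false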
	I1210 07:04:20.445817  288031 kubeadm.go:403] duration metric: took 8m6.40123863s to StartCluster
	I1210 07:04:20.445855  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:04:20.445921  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:04:20.470269  288031 cri.go:89] found id: ""
	I1210 07:04:20.470308  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.470316  288031 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:04:20.470323  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:04:20.470390  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:04:20.495234  288031 cri.go:89] found id: ""
	I1210 07:04:20.495265  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.495274  288031 logs.go:284] No container was found matching "etcd"
	I1210 07:04:20.495280  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:04:20.495373  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:04:20.521061  288031 cri.go:89] found id: ""
	I1210 07:04:20.521084  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.521093  288031 logs.go:284] No container was found matching "coredns"
	I1210 07:04:20.521099  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:04:20.521177  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:04:20.545895  288031 cri.go:89] found id: ""
	I1210 07:04:20.545918  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.545927  288031 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:04:20.545934  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:04:20.545990  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:04:20.570266  288031 cri.go:89] found id: ""
	I1210 07:04:20.570288  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.570297  288031 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:04:20.570303  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:04:20.570392  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:04:20.594282  288031 cri.go:89] found id: ""
	I1210 07:04:20.594304  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.594312  288031 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:04:20.594319  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:04:20.594383  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:04:20.618464  288031 cri.go:89] found id: ""
	I1210 07:04:20.618493  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.618501  288031 logs.go:284] No container was found matching "kindnet"
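	# The seven lookups above can be reproduced in one pass inside the node; the
	# crictl flags are taken verbatim from the logged commands:
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done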
	I1210 07:04:20.618511  288031 logs.go:123] Gathering logs for containerd ...
	I1210 07:04:20.618538  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:04:20.660630  288031 logs.go:123] Gathering logs for container status ...
	I1210 07:04:20.660704  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:04:20.699139  288031 logs.go:123] Gathering logs for kubelet ...
	I1210 07:04:20.699162  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:04:20.761847  288031 logs.go:123] Gathering logs for dmesg ...
	I1210 07:04:20.761880  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:04:20.775451  288031 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:04:20.775481  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:04:20.841106  288031 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:04:20.833391    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.834229    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.835767    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.836254    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.837830    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:04:20.833391    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.834229    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.835767    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.836254    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.837830    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
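	# The describe-nodes failure above is a plain connection refusal on 8443, so
	# no apiserver is listening yet. A quick check from inside the node (assuming
	# ss is present in the kicbase image):
	minikube ssh -p newest-cni-168808 -- "sudo ss -ltnp | grep 8443"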
	W1210 07:04:20.841129  288031 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001234655s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
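	# The SystemVerification warning above asks for the kubelet configuration
	# option 'FailCgroupV1' set to 'false'. A sketch of the KubeletConfiguration
	# stanza that expresses this; the target path is hypothetical, since where it
	# belongs depends on how the kubelet is launched on the node:
	cat <<-'EOF' | sudo tee /tmp/kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF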
	W1210 07:04:20.841183  288031 out.go:285] * 
	W1210 07:04:20.841248  288031 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001234655s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:04:20.841261  288031 out.go:285] * 
	W1210 07:04:20.843675  288031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:04:20.850638  288031 out.go:203] 
	W1210 07:04:20.853450  288031 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001234655s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:04:20.853494  288031 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:04:20.853520  288031 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
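	# Applying the suggestion above to this run's invocation; every flag except
	# the added --extra-config is taken from the failing test command, and whether
	# it clears the kubelet failure is not verified here:
	out/minikube-linux-arm64 start -p newest-cni-168808 --memory=3072 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd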
	I1210 07:04:20.856600  288031 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-168808
helpers_test.go:244: (dbg) docker inspect newest-cni-168808:

-- stdout --
	[
	    {
	        "Id": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	        "Created": "2025-12-10T06:55:56.205654512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:55:56.278762999Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hosts",
	        "LogPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3-json.log",
	        "Name": "/newest-cni-168808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-168808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-168808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	                "LowerDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-168808",
	                "Source": "/var/lib/docker/volumes/newest-cni-168808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-168808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-168808",
	                "name.minikube.sigs.k8s.io": "newest-cni-168808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8dc8a0bd8d67f970fd6ee9f5185b3999f597162904a68c34b61526eb2bb5352e",
	            "SandboxKey": "/var/run/docker/netns/8dc8a0bd8d67",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-168808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:0a:53:b3:10:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fedd4ad26097ebf6757101ef8e22a141acd4ba740aa95d5f1eab7ffc232007f5",
	                    "EndpointID": "32d7243a0bf1738641a18a9cb935e90041c7084e02ec3035ddaf5ac35cf4ef4b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-168808",
	                        "7d1db3aa80a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
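# Per the inspect output above, the apiserver port 8443/tcp is published on the
# host at 127.0.0.1:33096, so it can be probed without entering the container
# (-k skips verification of minikube's self-signed certificate):
curl -k https://127.0.0.1:33096/healthz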
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808: exit status 6 (326.122061ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:04:21.336716  301022 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
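# The stale-kubeconfig warning above comes with its own remedy; running it for
# this profile repoints kubectl at the right endpoint:
out/minikube-linux-arm64 -p newest-cni-168808 update-context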
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-451123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:51 UTC │ 10 Dec 25 06:52 UTC │
	│ stop    │ -p embed-certs-451123 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-451123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:53 UTC │
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │                     │
	│ stop    │ -p no-preload-320236 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ addons  │ enable dashboard -p no-preload-320236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:00:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
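	# Decoding the first entry below against this format: in
	# "I1210 07:00:31.606607  296020 out.go:360] Setting OutFile to fd 1 ...",
	# I is the Info severity, 1210 is Dec 10, 07:00:31.606607 is the wall time,
	# 296020 is the thread id, out.go:360 is file:line, and the rest is the message.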
	I1210 07:00:31.606607  296020 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:00:31.606726  296020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:00:31.606763  296020 out.go:374] Setting ErrFile to fd 2...
	I1210 07:00:31.606781  296020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:00:31.607068  296020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:00:31.607446  296020 out.go:368] Setting JSON to false
	I1210 07:00:31.608351  296020 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6182,"bootTime":1765343850,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:00:31.608452  296020 start.go:143] virtualization:  
	I1210 07:00:31.611858  296020 out.go:179] * [no-preload-320236] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:00:31.616135  296020 notify.go:221] Checking for updates...
	I1210 07:00:31.616625  296020 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:00:31.619795  296020 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:00:31.622704  296020 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:31.625649  296020 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:00:31.628623  296020 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:00:31.632108  296020 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:00:31.635513  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:31.636082  296020 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:00:31.668430  296020 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:00:31.668544  296020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:00:31.757341  296020 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:00:31.748329892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:00:31.757451  296020 docker.go:319] overlay module found
	I1210 07:00:31.760519  296020 out.go:179] * Using the docker driver based on existing profile
	I1210 07:00:31.763315  296020 start.go:309] selected driver: docker
	I1210 07:00:31.763332  296020 start.go:927] validating driver "docker" against &{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:31.763427  296020 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:00:31.764155  296020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:00:31.816369  296020 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:00:31.807572299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:00:31.816697  296020 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:00:31.816729  296020 cni.go:84] Creating CNI manager for ""
	I1210 07:00:31.816780  296020 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:00:31.816827  296020 start.go:353] cluster config:
	{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:31.820155  296020 out.go:179] * Starting "no-preload-320236" primary control-plane node in "no-preload-320236" cluster
	I1210 07:00:31.823065  296020 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:00:31.825850  296020 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:00:31.828615  296020 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:00:31.828709  296020 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:00:31.828754  296020 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 07:00:31.829080  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:31.848090  296020 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:00:31.848110  296020 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:00:31.848126  296020 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:00:31.848157  296020 start.go:360] acquireMachinesLock for no-preload-320236: {Name:mk4a67a43519a7e8fad4432e15b5aa1fee295390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:31.848210  296020 start.go:364] duration metric: took 35.34µs to acquireMachinesLock for "no-preload-320236"
	I1210 07:00:31.848227  296020 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:00:31.848233  296020 fix.go:54] fixHost starting: 
	I1210 07:00:31.848495  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:31.871386  296020 fix.go:112] recreateIfNeeded on no-preload-320236: state=Stopped err=<nil>
	W1210 07:00:31.871423  296020 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:00:31.874767  296020 out.go:252] * Restarting existing docker container for "no-preload-320236" ...
	I1210 07:00:31.874868  296020 cli_runner.go:164] Run: docker start no-preload-320236
	I1210 07:00:32.009251  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:32.156909  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:32.181453  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:32.182795  296020 kic.go:430] container "no-preload-320236" state is running.
	I1210 07:00:32.183209  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:32.232417  296020 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 07:00:32.232635  296020 machine.go:94] provisionDockerMachine start ...
	I1210 07:00:32.232693  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:32.261256  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:32.261589  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:32.261598  296020 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:00:32.262750  296020 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
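	The handshake EOF above is benign: the container was restarted an instant earlier, so sshd is not accepting connections yet, and libmachine keeps retrying until the same "hostname" command succeeds a few seconds later (07:00:35 below). A rough manual equivalent of that probe, using the mapped port, key path, and user that appear elsewhere in this log:
	
	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa \
	        -p 33098 docker@127.0.0.1 hostname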
	I1210 07:00:32.410295  296020 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410397  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:00:32.410406  296020 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.804µs
	I1210 07:00:32.410415  296020 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:00:32.410426  296020 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410466  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:00:32.410472  296020 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 47.402µs
	I1210 07:00:32.410478  296020 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410488  296020 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410538  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:00:32.410543  296020 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 57.051µs
	I1210 07:00:32.410550  296020 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410561  296020 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410587  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:00:32.410592  296020 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 32.222µs
	I1210 07:00:32.410597  296020 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410607  296020 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410641  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:00:32.410646  296020 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 40.46µs
	I1210 07:00:32.410652  296020 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410666  296020 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410699  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:00:32.410704  296020 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.333µs
	I1210 07:00:32.410709  296020 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:00:32.410718  296020 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410744  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:00:32.410748  296020 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 31.541µs
	I1210 07:00:32.410754  296020 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:00:32.410763  296020 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410800  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:00:32.410805  296020 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.233µs
	I1210 07:00:32.410810  296020 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:00:32.410817  296020 cache.go:87] Successfully saved all images to host disk.
	I1210 07:00:35.415945  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 07:00:35.415969  296020 ubuntu.go:182] provisioning hostname "no-preload-320236"
	I1210 07:00:35.416031  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:35.439002  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:35.439495  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:35.439512  296020 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-320236 && echo "no-preload-320236" | sudo tee /etc/hostname
	I1210 07:00:35.600226  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 07:00:35.600320  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:35.617143  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:35.617452  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:35.617472  296020 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:00:35.771609  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: 
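	The shell fragment above pins the node's hostname in the guest's /etc/hosts: it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists, so repeated provisioning never stacks up duplicates. The net effect is a single entry of the form:
	
	    127.0.1.1 no-preload-320236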
	I1210 07:00:35.771638  296020 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:00:35.771682  296020 ubuntu.go:190] setting up certificates
	I1210 07:00:35.771771  296020 provision.go:84] configureAuth start
	I1210 07:00:35.771846  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:35.791167  296020 provision.go:143] copyHostCerts
	I1210 07:00:35.791247  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:00:35.791260  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:00:35.791339  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:00:35.791446  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:00:35.791457  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:00:35.791485  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:00:35.791558  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:00:35.791566  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:00:35.791595  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:00:35.791661  296020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.no-preload-320236 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-320236]
	I1210 07:00:36.056131  296020 provision.go:177] copyRemoteCerts
	I1210 07:00:36.056213  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:00:36.056259  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.074420  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.179259  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:00:36.197688  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:00:36.220673  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:00:36.237968  296020 provision.go:87] duration metric: took 466.169895ms to configureAuth
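	configureAuth regenerated the machine server certificate for the SANs listed above (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-320236) and copied it to /etc/docker on the guest. One way to spot-check the installed cert, assuming a reasonably recent OpenSSL that supports the -ext option:
	
	    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName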
	I1210 07:00:36.237995  296020 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:00:36.238191  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:36.238203  296020 machine.go:97] duration metric: took 4.005560458s to provisionDockerMachine
	I1210 07:00:36.238212  296020 start.go:293] postStartSetup for "no-preload-320236" (driver="docker")
	I1210 07:00:36.238223  296020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:00:36.238275  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:00:36.238329  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.254857  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.358982  296020 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:00:36.362431  296020 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:00:36.362463  296020 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:00:36.362476  296020 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:00:36.362532  296020 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:00:36.362616  296020 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:00:36.362730  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:00:36.370123  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:00:36.387715  296020 start.go:296] duration metric: took 149.487982ms for postStartSetup
	I1210 07:00:36.387809  296020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:00:36.387850  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.404695  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.508174  296020 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:00:36.512870  296020 fix.go:56] duration metric: took 4.664630876s for fixHost
	I1210 07:00:36.512896  296020 start.go:83] releasing machines lock for "no-preload-320236", held for 4.664678434s
	I1210 07:00:36.512987  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:36.529627  296020 ssh_runner.go:195] Run: cat /version.json
	I1210 07:00:36.529680  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.529956  296020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:00:36.530021  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.556696  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.560591  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.658689  296020 ssh_runner.go:195] Run: systemctl --version
	I1210 07:00:36.753674  296020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:00:36.758001  296020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:00:36.758069  296020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:00:36.765538  296020 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:00:36.765576  296020 start.go:496] detecting cgroup driver to use...
	I1210 07:00:36.765607  296020 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:00:36.765653  296020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:00:36.782605  296020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:00:36.796109  296020 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:00:36.796200  296020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:00:36.811318  296020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:00:36.824166  296020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:00:36.940162  296020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:00:37.067248  296020 docker.go:234] disabling docker service ...
	I1210 07:00:37.067375  296020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:00:37.082860  296020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:00:37.097077  296020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:00:37.210251  296020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:00:37.318500  296020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:00:37.331193  296020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:00:37.346030  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
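	The /etc/crictl.yaml written just above sets a default runtime endpoint, which is why the bare crictl invocations later in this log (crictl version, crictl images, crictl info) reach containerd without extra flags. An explicit equivalent that bypasses the config file:
	
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version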
	I1210 07:00:37.491512  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:00:37.500237  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:00:37.508872  296020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:00:37.508946  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:00:37.517510  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:00:37.526466  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:00:37.534915  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:00:37.543652  296020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:00:37.551699  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:00:37.560511  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:00:37.569071  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:00:37.577739  296020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:00:37.585320  296020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:00:37.592659  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:37.721273  296020 ssh_runner.go:195] Run: sudo systemctl restart containerd
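	The sed sequence above rewrites /etc/containerd/config.toml in place to match what minikube expects: cgroupfs rather than systemd cgroups (SystemdCgroup = false), the runc.v2 shim, registry.k8s.io/pause:3.10.1 as the sandbox image, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled; the daemon-reload plus restart then applies it. A quick spot check of the key setting:
	
	    grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false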
	I1210 07:00:37.812117  296020 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:00:37.812183  296020 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:00:37.815932  296020 start.go:564] Will wait 60s for crictl version
	I1210 07:00:37.815991  296020 ssh_runner.go:195] Run: which crictl
	I1210 07:00:37.819381  296020 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:00:37.842923  296020 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:00:37.842993  296020 ssh_runner.go:195] Run: containerd --version
	I1210 07:00:37.862565  296020 ssh_runner.go:195] Run: containerd --version
	I1210 07:00:37.887310  296020 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:00:37.890224  296020 cli_runner.go:164] Run: docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:00:37.905602  296020 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:00:37.909066  296020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:00:37.918252  296020 kubeadm.go:884] updating cluster {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:00:37.918438  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.069274  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.216468  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.360305  296020 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:00:38.360402  296020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:00:38.384995  296020 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:00:38.385019  296020 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:00:38.385028  296020 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:00:38.385169  296020 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:00:38.385237  296020 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:00:38.412034  296020 cni.go:84] Creating CNI manager for ""
	I1210 07:00:38.412063  296020 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:00:38.412085  296020 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:00:38.412108  296020 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320236 NodeName:no-preload-320236 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:00:38.412227  296020 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-320236"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
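	The generated config above is four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one YAML stream; it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming the staged kubeadm binary, it can be sanity-checked on the node with kubeadm's own validator:
	
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new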
	
	I1210 07:00:38.412299  296020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:00:38.421091  296020 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:00:38.421163  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:00:38.429922  296020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:00:38.443653  296020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:00:38.457014  296020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:00:38.471955  296020 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:00:38.475882  296020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:00:38.485504  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:38.595895  296020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:00:38.612585  296020 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236 for IP: 192.168.85.2
	I1210 07:00:38.612609  296020 certs.go:195] generating shared ca certs ...
	I1210 07:00:38.612627  296020 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:38.612815  296020 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:00:38.612878  296020 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:00:38.612890  296020 certs.go:257] generating profile certs ...
	I1210 07:00:38.612999  296020 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key
	I1210 07:00:38.613070  296020 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447
	I1210 07:00:38.613137  296020 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key
	I1210 07:00:38.613277  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:00:38.613326  296020 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:00:38.613338  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:00:38.613368  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:00:38.613404  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:00:38.613433  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:00:38.613490  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:00:38.614212  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:00:38.631972  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:00:38.649467  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:00:38.666377  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:00:38.686373  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:00:38.703781  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:00:38.723153  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:00:38.740812  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:00:38.758333  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:00:38.775839  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:00:38.793284  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:00:38.810326  296020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:00:38.822556  296020 ssh_runner.go:195] Run: openssl version
	I1210 07:00:38.829436  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.836724  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:00:38.844002  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.847779  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.847843  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.893925  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:00:38.901463  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.909031  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:00:38.916756  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.920591  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.920655  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.962196  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:00:38.969616  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.976917  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:00:38.984547  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.988142  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.988227  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:00:39.029601  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
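	The "test -L" checks above confirm OpenSSL-style trust links: each CA in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is how OpenSSL locates a trusted issuer at verify time. Spelled out for the first cert, using the hash and link visible in this log:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0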
	I1210 07:00:39.037081  296020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:00:39.040891  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:00:39.082809  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:00:39.123802  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:00:39.170233  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:00:39.211599  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:00:39.252658  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
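	Each of the -checkend 86400 runs above exits nonzero if the certificate in question expires within 86400 seconds (24 hours); minikube uses that here to decide whether the existing control-plane certificates can be reused on restart. Standalone form of the same check:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	        && echo 'valid for at least 24h' || echo 'expiring soon; would regenerate'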
	I1210 07:00:39.293664  296020 kubeadm.go:401] StartCluster: {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:39.293761  296020 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:00:39.293833  296020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:00:39.326465  296020 cri.go:89] found id: ""
	I1210 07:00:39.326535  296020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:00:39.334044  296020 kubeadm.go:417] found existing configuration files, will attempt cluster restart
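
The "sudo ls" above is the restart probe: if kubeadm's flag file, the kubelet config, and the etcd data dir all survive on the node, minikube restarts the existing cluster instead of running a fresh kubeadm init. A local-filesystem stand-in for that decision (the real check runs over SSH; haveExistingCluster is our name, the paths are the ones minikube lists):

    package main

    import (
        "fmt"
        "os"
    )

    func haveExistingCluster() bool {
        for _, p := range []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false // any missing file means a fresh init, not a restart
            }
        }
        return true
    }

    func main() {
        fmt.Println("attempt cluster restart:", haveExistingCluster())
    }
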
	I1210 07:00:39.334065  296020 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:00:39.334134  296020 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:00:39.341326  296020 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:00:39.341712  296020 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:39.341813  296020 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-320236" cluster setting kubeconfig missing "no-preload-320236" context setting]
	I1210 07:00:39.342066  296020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.343566  296020 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:00:39.351071  296020 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:00:39.351101  296020 kubeadm.go:602] duration metric: took 17.030813ms to restartPrimaryControlPlane
	I1210 07:00:39.351110  296020 kubeadm.go:403] duration metric: took 57.455602ms to StartCluster
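
At this point the profile is absent from the shared kubeconfig, so minikube repairs the file before waiting on the node. A sketch of the missing-entry detection using client-go's clientcmd package (the function name is ours, and the default kubeconfig path is illustrative; the test uses its own kubeconfig under the integration directory):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    // missingEntries reports which cluster/context entries for a profile are
    // absent from a kubeconfig file, mirroring the "needs updating (will
    // repair)" message above.
    func missingEntries(path, profile string) ([]string, error) {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return nil, err
        }
        var missing []string
        if _, ok := cfg.Clusters[profile]; !ok {
            missing = append(missing, `kubeconfig missing "`+profile+`" cluster setting`)
        }
        if _, ok := cfg.Contexts[profile]; !ok {
            missing = append(missing, `kubeconfig missing "`+profile+`" context setting`)
        }
        return missing, nil
    }

    func main() {
        missing, err := missingEntries(clientcmd.RecommendedHomeFile, "no-preload-320236")
        if err != nil {
            panic(err)
        }
        fmt.Println(missing)
    }
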
	I1210 07:00:39.351126  296020 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.351186  296020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:39.351790  296020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.351984  296020 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:00:39.352290  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:39.352337  296020 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:00:39.352428  296020 addons.go:70] Setting storage-provisioner=true in profile "no-preload-320236"
	I1210 07:00:39.352444  296020 addons.go:239] Setting addon storage-provisioner=true in "no-preload-320236"
	I1210 07:00:39.352451  296020 addons.go:70] Setting dashboard=true in profile "no-preload-320236"
	I1210 07:00:39.352465  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.352474  296020 addons.go:239] Setting addon dashboard=true in "no-preload-320236"
	W1210 07:00:39.352482  296020 addons.go:248] addon dashboard should already be in state true
	I1210 07:00:39.352506  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.352930  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.353043  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.353336  296020 addons.go:70] Setting default-storageclass=true in profile "no-preload-320236"
	I1210 07:00:39.353358  296020 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320236"
	I1210 07:00:39.353631  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.356443  296020 out.go:179] * Verifying Kubernetes components...
	I1210 07:00:39.359604  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:39.392662  296020 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:00:39.395653  296020 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:00:39.398571  296020 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:00:39.398592  296020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:00:39.398654  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.398779  296020 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:00:39.401749  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:00:39.401779  296020 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:00:39.401844  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.412459  296020 addons.go:239] Setting addon default-storageclass=true in "no-preload-320236"
	I1210 07:00:39.412502  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.412911  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.451209  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.451232  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.471156  296020 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:39.471176  296020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:00:39.471241  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.496650  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.601190  296020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:00:39.614141  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:00:39.645005  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:00:39.645028  296020 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:00:39.654222  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:39.665638  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:00:39.665659  296020 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:00:39.712904  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:00:39.712926  296020 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:00:39.726749  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:00:39.726772  296020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:00:39.740856  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:00:39.740877  296020 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:00:39.756673  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:00:39.756740  296020 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:00:39.769276  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:00:39.769343  296020 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:00:39.781575  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:00:39.781598  296020 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:00:39.794119  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:00:39.794141  296020 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:00:39.806448  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:40.411601  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.411697  296020 retry.go:31] will retry after 364.307231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.411787  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.411824  296020 retry.go:31] will retry after 175.448245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.412081  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.412126  296020 retry.go:31] will retry after 340.80415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.412177  296020 node_ready.go:35] waiting up to 6m0s for node "no-preload-320236" to be "Ready" ...
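
node_ready.go now polls the Node object for up to 6 minutes; while the apiserver is still coming up, each poll fails with "connection refused" (as the warning a few lines below shows) and the error is swallowed so polling continues. A client-go sketch of such a wait, assuming the cluster's kubeconfig is loadable (waitNodeReady is our name, not minikube's):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    fmt.Println("error getting node (will retry):", err)
                    return false, nil // transient: keep polling until the timeout
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "no-preload-320236", 6*time.Minute); err != nil {
            panic(err)
        }
    }
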
	I1210 07:00:40.587992  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:40.644838  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.644918  296020 retry.go:31] will retry after 280.859873ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.754069  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:00:40.776546  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:40.828821  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.828916  296020 retry.go:31] will retry after 208.166646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.845124  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.845178  296020 retry.go:31] will retry after 309.037844ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.926770  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:40.985165  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.985193  296020 retry.go:31] will retry after 576.96991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.037550  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:41.099191  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.099230  296020 retry.go:31] will retry after 760.269809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.154571  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:41.223133  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.223166  296020 retry.go:31] will retry after 384.5048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.563176  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:41.607812  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:41.634200  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.634229  296020 retry.go:31] will retry after 958.895789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:41.670372  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.670408  296020 retry.go:31] will retry after 1.242104692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.860733  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:41.944937  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.944981  296020 retry.go:31] will retry after 1.203859969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:42.412917  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:42.594314  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:42.653050  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.653087  296020 retry.go:31] will retry after 1.019515228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.912735  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:42.992543  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.992575  296020 retry.go:31] will retry after 1.525694084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.149942  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:43.215395  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.215430  296020 retry.go:31] will retry after 1.081952772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.673229  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:43.753817  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.753847  296020 retry.go:31] will retry after 2.453351659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.297966  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:44.359469  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.359502  296020 retry.go:31] will retry after 2.437831877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:44.413141  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:44.518419  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:44.578484  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.578514  296020 retry.go:31] will retry after 2.525951728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.207448  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:46.269857  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.269893  296020 retry.go:31] will retry after 2.493371842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:46.413377  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:46.798249  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:46.865016  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.865052  296020 retry.go:31] will retry after 1.595518707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:47.104732  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:47.167159  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:47.167199  296020 retry.go:31] will retry after 2.421365807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.461029  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:48.523416  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.523451  296020 retry.go:31] will retry after 5.045916415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.763783  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:48.826893  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.826925  296020 retry.go:31] will retry after 2.901964551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:48.913552  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:49.589035  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:49.649801  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:49.649838  296020 retry.go:31] will retry after 4.385171631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:50.913785  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:51.729508  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:51.789192  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:51.789222  296020 retry.go:31] will retry after 4.971484132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:53.412679  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:53.570118  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:53.628103  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:53.628135  296020 retry.go:31] will retry after 4.154709683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:54.035994  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:54.099925  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:54.099959  296020 retry.go:31] will retry after 5.104591633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:55.413548  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:56.761591  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:56.827407  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:56.827438  296020 retry.go:31] will retry after 6.353816854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:57.783555  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:57.845429  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:57.845462  296020 retry.go:31] will retry after 8.667848959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:57.912770  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:59.205067  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:59.264096  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:59.264126  296020 retry.go:31] will retry after 10.603627722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:59.912812  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:01.913336  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:03.181966  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:03.241570  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:03.241608  296020 retry.go:31] will retry after 19.837023952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:04.412784  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:06.413759  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:06.515688  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:06.581717  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:06.581752  296020 retry.go:31] will retry after 20.713933736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:08.913557  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:09.868219  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:09.930350  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:09.930381  296020 retry.go:31] will retry after 16.670877723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:11.413714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:13.913676  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:16.413698  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:18.913533  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:21.413576  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:23.079136  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:23.142459  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:23.142490  296020 retry.go:31] will retry after 12.673593141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:23.913225  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:25.913289  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:26.601791  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:26.668541  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:26.668575  296020 retry.go:31] will retry after 21.28734842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:27.295978  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:27.360758  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:27.360795  296020 retry.go:31] will retry after 15.710281845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:27.913387  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:29.913460  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:31.913645  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:33.913718  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:35.816320  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:35.874198  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:35.874230  296020 retry.go:31] will retry after 21.376325369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:36.412713  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:38.412808  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:40.913670  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:42.913819  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:43.072120  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:43.135982  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:43.136017  296020 retry.go:31] will retry after 16.570147181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:45.412747  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:47.913625  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:47.956911  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:48.019680  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:48.019731  296020 retry.go:31] will retry after 28.501835741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:49.913722  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:52.412735  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:54.913738  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:57.251364  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:57.311036  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:57.311129  296020 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
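
Every failure in this run shares one root cause: nothing is accepting connections on the apiserver port, so kubectl's OpenAPI schema download fails with "connection refused" before validation can even start. The suggested --validate=false would only defer the failure, since the apply itself must reach the same refused port. A plain TCP probe confirms the condition directly; this is a minimal sketch, with the host and port taken from the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same address kubectl is failing to reach in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Typically "connect: connection refused" while the apiserver is down.
			fmt.Println("apiserver port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
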
	W1210 07:01:57.412814  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:59.413910  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:59.706314  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:59.768631  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:59.768667  296020 retry.go:31] will retry after 38.033263553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:02:01.912954  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:03.913786  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:06.413305  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:08.413743  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:10.913528  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:12.913703  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:15.412647  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:02:16.522068  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:02:16.598309  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:02:16.598419  296020 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
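	The dashboard and storage-provisioner failures above are client-side validation errors: kubectl downloads the OpenAPI schema from the apiserver before applying a manifest, so while the apiserver is unreachable even validation fails. The escape hatch the error text names can be exercised directly (binary and paths taken from the log; note that --validate=false only skips the schema check, the apply itself still needs a reachable apiserver):
	
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml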
	W1210 07:02:17.412833  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:19.413691  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:21.912889  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:24.412677  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:26.412851  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:28.413641  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:30.913357  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:32.913501  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:34.913596  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:37.412726  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:02:37.802376  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:02:37.868725  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:02:37.868813  296020 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:02:37.871969  296020 out.go:179] * Enabled addons: 
	I1210 07:02:37.875533  296020 addons.go:530] duration metric: took 1m58.523193068s for enable addons: enabled=[]
	W1210 07:02:39.412868  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:41.913602  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:43.913738  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:46.413639  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:48.913771  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:51.412644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:53.413621  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:55.913703  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:57.913798  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:00.413728  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:02.912729  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:04.913512  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:06.913772  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:09.412627  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:11.412767  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:13.913613  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:15.913755  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:18.412757  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:20.413704  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:22.913503  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:24.913715  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:27.412717  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:29.412799  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:31.912720  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:33.913761  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:36.413520  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:38.413644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:40.913728  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:43.412693  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:45.413603  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:47.413648  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:49.913012  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:52.413627  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:54.913602  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:57.413653  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:59.912671  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:01.913548  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:03.913688  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:06.413733  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:08.912736  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:10.913633  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:13.412730  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:15.413663  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
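	Each node_ready retry above is one GET against /api/v1/nodes/no-preload-320236 checking the node's "Ready" condition. A manual equivalent of that probe, as a sketch using the server address from the log:
	
	  kubectl --server=https://192.168.85.2:8443 get node no-preload-320236 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # prints "True" once the node is ready; here the TCP dial itself is refused, so minikube keeps retrying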
	I1210 07:04:20.438913  288031 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001234655s
	I1210 07:04:20.438947  288031 kubeadm.go:319] 
	I1210 07:04:20.439199  288031 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:04:20.439384  288031 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:04:20.439577  288031 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:04:20.439588  288031 kubeadm.go:319] 
	I1210 07:04:20.439880  288031 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:04:20.439939  288031 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:04:20.439994  288031 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:04:20.440000  288031 kubeadm.go:319] 
	I1210 07:04:20.444885  288031 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:04:20.445319  288031 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:04:20.445433  288031 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:04:20.445673  288031 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:04:20.445684  288031 kubeadm.go:319] 
	I1210 07:04:20.445752  288031 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
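	kubeadm's kubelet-check is a plain HTTP probe of the kubelet's healthz endpoint; the probe and the troubleshooting commands it recommends can be run directly on the node (all three taken verbatim from the output above):
	
	  curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
	  systemctl status kubelet
	  journalctl -xeu kubelet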
	I1210 07:04:20.445817  288031 kubeadm.go:403] duration metric: took 8m6.40123863s to StartCluster
	I1210 07:04:20.445855  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:04:20.445921  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:04:20.470269  288031 cri.go:89] found id: ""
	I1210 07:04:20.470308  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.470316  288031 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:04:20.470323  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:04:20.470390  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:04:20.495234  288031 cri.go:89] found id: ""
	I1210 07:04:20.495265  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.495274  288031 logs.go:284] No container was found matching "etcd"
	I1210 07:04:20.495280  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:04:20.495373  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:04:20.521061  288031 cri.go:89] found id: ""
	I1210 07:04:20.521084  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.521093  288031 logs.go:284] No container was found matching "coredns"
	I1210 07:04:20.521099  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:04:20.521177  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:04:20.545895  288031 cri.go:89] found id: ""
	I1210 07:04:20.545918  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.545927  288031 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:04:20.545934  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:04:20.545990  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:04:20.570266  288031 cri.go:89] found id: ""
	I1210 07:04:20.570288  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.570297  288031 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:04:20.570303  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:04:20.570392  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:04:20.594282  288031 cri.go:89] found id: ""
	I1210 07:04:20.594304  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.594312  288031 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:04:20.594319  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:04:20.594383  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:04:20.618464  288031 cri.go:89] found id: ""
	I1210 07:04:20.618493  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.618501  288031 logs.go:284] No container was found matching "kindnet"
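	Every component query above comes back empty. Each check is a crictl listing of the form below; an empty result for all control-plane names means the kubelet never created the static pods from /etc/kubernetes/manifests, consistent with the kubelet-check timeout:
	
	  sudo crictl ps -a --quiet --name=kube-apiserver   # empty output: no apiserver container was ever created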
	I1210 07:04:20.618511  288031 logs.go:123] Gathering logs for containerd ...
	I1210 07:04:20.618538  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:04:20.660630  288031 logs.go:123] Gathering logs for container status ...
	I1210 07:04:20.660704  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:04:20.699139  288031 logs.go:123] Gathering logs for kubelet ...
	I1210 07:04:20.699162  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:04:20.761847  288031 logs.go:123] Gathering logs for dmesg ...
	I1210 07:04:20.761880  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:04:20.775451  288031 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:04:20.775481  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:04:20.841106  288031 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:04:20.833391    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.834229    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.835767    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.836254    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.837830    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:04:20.833391    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.834229    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.835767    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.836254    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.837830    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:04:20.841129  288031 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001234655s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:04:20.841183  288031 out.go:285] * 
	W1210 07:04:20.841248  288031 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1210 07:04:20.841261  288031 out.go:285] * 
	W1210 07:04:20.843675  288031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:04:20.850638  288031 out.go:203] 
	W1210 07:04:20.853450  288031 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1210 07:04:20.853494  288031 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:04:20.853520  288031 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:04:20.856600  288031 out.go:203] 
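	The kubelet section below shows the underlying failure: kubelet v1.35 refuses to start on a cgroup v1 host unless cgroup v1 support is explicitly re-enabled. Two candidate workarounds, both drawn from the messages above (the KubeletConfiguration field spelling failCgroupV1 is an assumption based on the 'FailCgroupV1' option named in the SystemVerification warning):
	
	  # minikube's own suggestion from the log
	  minikube start --extra-config=kubelet.cgroup-driver=systemd
	  # sketch: explicitly allow cgroup v1 in the kubelet config, then restart
	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	  sudo systemctl restart kubelet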
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:56:06 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:06.074293751Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.010217823Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.012578615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.021299576Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.022077659Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.096856637Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.100315793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.108662047Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.109287489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.423910237Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.426532520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.435123683Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.435763278Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.431875098Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.434111934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.441828882Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.442369950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.465834077Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.466813179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.471098820Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.472460982Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.802275357Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.803292990Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.806681852Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.807174320Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
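	The containerd events above show that each required image (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, storage-provisioner) was created and registered, so image availability is not the failure here. The pulled set can be confirmed on the node with:
	
	  sudo crictl images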
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:04:22.026179    5580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:22.026646    5580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:22.028280    5580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:22.028625    5580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:22.030173    5580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 07:04:22 up  1:46,  0 user,  load average: 0.54, 0.82, 1.52
	Linux newest-cni-168808 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:04:18 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:19 newest-cni-168808 kubelet[5383]: E1210 07:04:19.211150    5383 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:19 newest-cni-168808 kubelet[5389]: E1210 07:04:19.953492    5389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:04:19 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:04:20 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 07:04:20 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:20 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:20 newest-cni-168808 kubelet[5455]: E1210 07:04:20.722650    5455 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:04:20 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:04:20 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:04:21 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 07:04:21 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:21 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:04:21 newest-cni-168808 kubelet[5499]: E1210 07:04:21.471875    5499 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:04:21 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:04:21 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
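The kubelet restart loop above (restart counter 318 through 321) is the root failure of this run: kubelet v1.35.0-rc.1 validates the host cgroup hierarchy at startup and, as configured here, refuses to run on a cgroup v1 host. A minimal check of which hierarchy the node actually has, assuming the profile name from the log and that stat is available inside the kic container:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy v1
	docker exec newest-cni-168808 stat -fc %T /sys/fs/cgroup/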
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808: exit status 6 (321.636557ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:04:22.561494  301242 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-168808" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (507.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-320236 create -f testdata/busybox.yaml
E1210 06:58:37.012788    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-320236 create -f testdata/busybox.yaml: exit status 1 (57.035051ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-320236" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-320236 create -f testdata/busybox.yaml failed: exit status 1
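The immediate cause is a missing kubeconfig context rather than the busybox manifest itself: FirstStart for this profile failed, so minikube never wrote a "no-preload-320236" entry into the test kubeconfig. A quick way to see which contexts that kubeconfig actually holds (path taken from the errors below):

	# The test expects "no-preload-320236" to appear in this list
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22094-2307/kubeconfig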
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-320236
helpers_test.go:244: (dbg) docker inspect no-preload-320236:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	        "Created": "2025-12-10T06:50:11.529745127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266409,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:50:11.59482855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hosts",
	        "LogPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df-json.log",
	        "Name": "/no-preload-320236",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-320236:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-320236",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	                "LowerDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-320236",
	                "Source": "/var/lib/docker/volumes/no-preload-320236/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-320236",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-320236",
	                "name.minikube.sigs.k8s.io": "no-preload-320236",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85ae7e8702e41f92b33b5a42b651a54aa9c0e327b78652a75f1a51d370271f8b",
	            "SandboxKey": "/var/run/docker/netns/85ae7e8702e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-320236": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:07:05:69:57:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ffc647756423b4e81a1338ec5e1d5f1765ed1034ce5ae186c5fbcc84bf8cb09",
	                    "EndpointID": "d093b0e10fa0218a37c48573bc31f25266756d6a2b6d0253a5c740e71d806388",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-320236",
	                        "afeff1ab0e38"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
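The full inspect dump above can be narrowed to the fields the post-mortem cares about with Go templates; note the network key contains dashes, so it must be read with index rather than dot syntax. A sketch using the container name from the dump:

	# Container state, then the static IP assigned on the profile network
	docker inspect -f '{{.State.Status}}' no-preload-320236
	docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-320236").IPAddress}}' no-preload-320236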
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 6 (336.9655ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 06:58:37.373946  293011 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
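The status error matches the stdout warning: the host container is Running, but the kubectl context for the profile is missing or stale. When the profile itself is healthy, the report's own suggestion is the fix; a recovery sketch using the binary under test:

	# Rewrite the kubeconfig entry for the profile, then confirm it exists
	out/minikube-linux-arm64 -p no-preload-320236 update-context
	kubectl config get-contexts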
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-320236 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-806899                                                                                                                                                                                                                                │ old-k8s-version-806899       │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-712093                                                                                                                                                                                                                             │ kubernetes-upgrade-712093    │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-451123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:51 UTC │ 10 Dec 25 06:52 UTC │
	│ stop    │ -p embed-certs-451123 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-451123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:53 UTC │
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:55:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:55:54.981794  288031 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:55:54.981926  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.981937  288031 out.go:374] Setting ErrFile to fd 2...
	I1210 06:55:54.981942  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.982225  288031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:55:54.982645  288031 out.go:368] Setting JSON to false
	I1210 06:55:54.983532  288031 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5905,"bootTime":1765343850,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:55:54.983604  288031 start.go:143] virtualization:  
	I1210 06:55:54.987589  288031 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:55:54.990952  288031 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:55:54.991143  288031 notify.go:221] Checking for updates...
	I1210 06:55:54.999718  288031 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:55:55.004245  288031 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:55:55.007947  288031 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:55:55.011263  288031 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:55:55.014567  288031 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:55:55.018346  288031 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:55:55.018474  288031 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:55:55.050040  288031 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:55:55.050159  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.110692  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.101413341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.110829  288031 docker.go:319] overlay module found
	I1210 06:55:55.114039  288031 out.go:179] * Using the docker driver based on user configuration
	I1210 06:55:55.116970  288031 start.go:309] selected driver: docker
	I1210 06:55:55.116990  288031 start.go:927] validating driver "docker" against <nil>
	I1210 06:55:55.117003  288031 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:55:55.117774  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.187658  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.175913019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.187828  288031 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:55:55.187862  288031 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:55:55.188080  288031 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:55:55.191065  288031 out.go:179] * Using Docker driver with root privileges
	I1210 06:55:55.193975  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:55:55.194040  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:55:55.194060  288031 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:55:55.194137  288031 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:55:55.197188  288031 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 06:55:55.199998  288031 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:55:55.202945  288031 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:55:55.205774  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:55:55.205946  288031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:55:55.228535  288031 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:55:55.228555  288031 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:55:55.253626  288031 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 06:55:55.392999  288031 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
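	# Both preload URLs above return 404: no preloaded-images tarball appears to
	# be published for the v1.35.0-rc.1 release candidate, so minikube falls back
	# to caching each component image individually (see the cache.go lines below).
	# Confirming by hand, with the URL copied verbatim from the first warning:
	#   curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 | head -n1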
	I1210 06:55:55.393221  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:55:55.393258  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json: {Name:mke358d8c3878b6ccc086ae75b08bfbb6079278d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:55:55.393289  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.393417  288031 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:55:55.393461  288031 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.393543  288031 start.go:364] duration metric: took 46.523µs to acquireMachinesLock for "newest-cni-168808"
	I1210 06:55:55.393571  288031 start.go:93] Provisioning new machine with config: &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:55:55.393679  288031 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:55:55.397127  288031 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:55:55.397358  288031 start.go:159] libmachine.API.Create for "newest-cni-168808" (driver="docker")
	I1210 06:55:55.397385  288031 client.go:173] LocalClient.Create starting
	I1210 06:55:55.397438  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 06:55:55.397479  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397497  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397545  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 06:55:55.397561  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397572  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397949  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:55:55.421587  288031 cli_runner.go:211] docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:55:55.421662  288031 network_create.go:284] running [docker network inspect newest-cni-168808] to gather additional debugging logs...
	I1210 06:55:55.421680  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808
	W1210 06:55:55.440445  288031 cli_runner.go:211] docker network inspect newest-cni-168808 returned with exit code 1
	I1210 06:55:55.440476  288031 network_create.go:287] error running [docker network inspect newest-cni-168808]: docker network inspect newest-cni-168808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-168808 not found
	I1210 06:55:55.440491  288031 network_create.go:289] output of [docker network inspect newest-cni-168808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-168808 not found
	
	** /stderr **
	I1210 06:55:55.440592  288031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:55:55.472278  288031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 06:55:55.472550  288031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 06:55:55.472849  288031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 06:55:55.473245  288031 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fe00}
	I1210 06:55:55.473272  288031 network_create.go:124] attempt to create docker network newest-cni-168808 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:55:55.473327  288031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-168808 newest-cni-168808
	I1210 06:55:55.535150  288031 network_create.go:108] docker network newest-cni-168808 192.168.76.0/24 created
	I1210 06:55:55.535181  288031 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-168808" container
	I1210 06:55:55.535292  288031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:55:55.551392  288031 cli_runner.go:164] Run: docker volume create newest-cni-168808 --label name.minikube.sigs.k8s.io=newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:55:55.554117  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.578140  288031 oci.go:103] Successfully created a docker volume newest-cni-168808
	I1210 06:55:55.578234  288031 cli_runner.go:164] Run: docker run --rm --name newest-cni-168808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --entrypoint /usr/bin/test -v newest-cni-168808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:55:55.718018  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.932804  288031 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.932932  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:55:55.932947  288031 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 167.936µs
	I1210 06:55:55.932957  288031 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:55:55.932978  288031 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933015  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:55:55.933025  288031 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 53.498µs
	I1210 06:55:55.933032  288031 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933044  288031 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933075  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:55:55.933085  288031 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 42.708µs
	I1210 06:55:55.933092  288031 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933106  288031 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933143  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:55:55.933152  288031 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 47.762µs
	I1210 06:55:55.933164  288031 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933176  288031 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933206  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:55:55.933216  288031 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 41.01µs
	I1210 06:55:55.933228  288031 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933236  288031 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933268  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:55:55.933277  288031 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.945µs
	I1210 06:55:55.933283  288031 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:55:55.933292  288031 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933320  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:55:55.933328  288031 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.703µs
	I1210 06:55:55.933334  288031 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:55:55.933343  288031 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933369  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:55:55.933381  288031 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.287µs
	I1210 06:55:55.933387  288031 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:55:55.933393  288031 cache.go:87] Successfully saved all images to host disk.
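The cache.go lines above repeat one fast path per image: acquire a named per-image lock, stat the tar file under .minikube/cache/images, and report "save ... succeeded" without pulling anything when the file already exists (hence the microsecond durations). A minimal Go sketch of that pattern; saveToTar and the lock map are hypothetical stand-ins, not minikube's actual API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

var locks sync.Map // one mutex per image, like the named locks in the log

// saveToTar is a hypothetical stand-in for cache.go's save path.
func saveToTar(image, cacheDir string) error {
	m, _ := locks.LoadOrStore(image, &sync.Mutex{})
	mu := m.(*sync.Mutex)
	mu.Lock() // "acquiring lock: {Name:mk... Timeout:10m0s}"
	defer mu.Unlock()

	start := time.Now()
	// registry.k8s.io/pause:3.10.1 -> .../registry.k8s.io/pause_3.10.1
	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(dst); err == nil { // "exists" fast path
		fmt.Printf("cache image %q -> %q took %s\n", image, dst, time.Since(start))
		return nil // "save to tar file ... succeeded" without any pull
	}
	// cache miss: the image would be pulled and written to dst (omitted here)
	return nil
}

func main() {
	_ = saveToTar("registry.k8s.io/pause:3.10.1", "/tmp/cache/images/arm64")
}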
	I1210 06:55:56.133246  288031 oci.go:107] Successfully prepared a docker volume newest-cni-168808
	I1210 06:55:56.133310  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 06:55:56.133458  288031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:55:56.133555  288031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:55:56.190219  288031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-168808 --name newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-168808 --network newest-cni-168808 --ip 192.168.76.2 --volume newest-cni-168808:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:55:56.510233  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Running}}
	I1210 06:55:56.532276  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.559120  288031 cli_runner.go:164] Run: docker exec newest-cni-168808 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:55:56.616474  288031 oci.go:144] the created container "newest-cni-168808" has a running status.
	I1210 06:55:56.616510  288031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa...
	I1210 06:55:56.920989  288031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:55:56.944042  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.969366  288031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:55:56.969535  288031 kic_runner.go:114] Args: [docker exec --privileged newest-cni-168808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:55:57.033434  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:57.058007  288031 machine.go:94] provisionDockerMachine start ...
	I1210 06:55:57.058103  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:55:57.089237  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:55:57.089566  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:55:57.089575  288031 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:55:57.090220  288031 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58770->127.0.0.1:33093: read: connection reset by peer
	I1210 06:56:00.364112  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.364135  288031 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 06:56:00.364212  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.456773  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.457119  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.457133  288031 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 06:56:00.645316  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.645407  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.664033  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.664382  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.664404  288031 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:56:00.815306  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:56:00.815331  288031 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:56:00.815364  288031 ubuntu.go:190] setting up certificates
	I1210 06:56:00.815372  288031 provision.go:84] configureAuth start
	I1210 06:56:00.815439  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:00.832798  288031 provision.go:143] copyHostCerts
	I1210 06:56:00.832883  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:56:00.832898  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:56:00.832975  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:56:00.833075  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:56:00.833087  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:56:00.833119  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:56:00.833186  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:56:00.833196  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:56:00.833222  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:56:00.833276  288031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
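The SAN list logged above, san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808], maps directly onto the IPAddresses and DNSNames fields of an x509 certificate template. A simplified sketch of that step; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-168808"}}, // "org=" in the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s below
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-168808"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}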
	I1210 06:56:00.918781  288031 provision.go:177] copyRemoteCerts
	I1210 06:56:00.919089  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:56:00.919173  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.937214  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.043240  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:56:01.061326  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:56:01.079140  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:56:01.096712  288031 provision.go:87] duration metric: took 281.317584ms to configureAuth
	I1210 06:56:01.096743  288031 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:56:01.096994  288031 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:56:01.097006  288031 machine.go:97] duration metric: took 4.038973217s to provisionDockerMachine
	I1210 06:56:01.097025  288031 client.go:176] duration metric: took 5.699623594s to LocalClient.Create
	I1210 06:56:01.097050  288031 start.go:167] duration metric: took 5.699693115s to libmachine.API.Create "newest-cni-168808"
	I1210 06:56:01.097057  288031 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 06:56:01.097073  288031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:56:01.097147  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:56:01.097204  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.117411  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.225094  288031 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:56:01.228823  288031 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:56:01.228858  288031 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:56:01.228870  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:56:01.228945  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:56:01.229044  288031 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:56:01.229154  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:56:01.237207  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:01.255822  288031 start.go:296] duration metric: took 158.728391ms for postStartSetup
	I1210 06:56:01.256262  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.275219  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:56:01.275529  288031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:56:01.275586  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.293397  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.396136  288031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:56:01.401043  288031 start.go:128] duration metric: took 6.00734179s to createHost
	I1210 06:56:01.401068  288031 start.go:83] releasing machines lock for "newest-cni-168808", held for 6.007509906s
	I1210 06:56:01.401140  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.417888  288031 ssh_runner.go:195] Run: cat /version.json
	I1210 06:56:01.417948  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.418253  288031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:56:01.418318  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.442401  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.449051  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.632926  288031 ssh_runner.go:195] Run: systemctl --version
	I1210 06:56:01.640549  288031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:56:01.645141  288031 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:56:01.645218  288031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:56:01.673901  288031 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:56:01.673935  288031 start.go:496] detecting cgroup driver to use...
	I1210 06:56:01.673969  288031 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:56:01.674032  288031 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:56:01.689298  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:56:01.702121  288031 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:56:01.702192  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:56:01.720186  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:56:01.738710  288031 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:56:01.852215  288031 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:56:01.989095  288031 docker.go:234] disabling docker service ...
	I1210 06:56:01.989232  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:56:02.016451  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:56:02.030687  288031 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:56:02.153586  288031 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:56:02.280278  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:56:02.293652  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:56:02.308576  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:02.458303  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:56:02.467239  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:56:02.475789  288031 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:56:02.475860  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:56:02.484995  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.493944  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:56:02.503478  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.512024  288031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:56:02.520354  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:56:02.529401  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:56:02.538409  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:56:02.548300  288031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:56:02.556042  288031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:56:02.563716  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:02.677702  288031 ssh_runner.go:195] Run: sudo systemctl restart containerd
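Each Run line from 06:56:02.458 through 06:56:02.538 is a sed rewrite of /etc/containerd/config.toml, and the daemon-reload/restart pair above then picks the edits up. As an illustration only, the SystemdCgroup flip (the 'configuring containerd to use "cgroupfs"' step) could be done from Go with the same regex the sed command uses:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}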
	I1210 06:56:02.766228  288031 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:56:02.766303  288031 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:56:02.770737  288031 start.go:564] Will wait 60s for crictl version
	I1210 06:56:02.770834  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:02.775190  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:56:02.800314  288031 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:56:02.800416  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.821570  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.847675  288031 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:56:02.850751  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:56:02.867882  288031 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:56:02.871991  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:56:02.885356  288031 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:56:02.888273  288031 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:56:02.888501  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.049684  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.199179  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.344408  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:56:03.344500  288031 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:56:03.372099  288031 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:56:03.372123  288031 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:56:03.372188  288031 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.372216  288031 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.372401  288031 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.372426  288031 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.372484  288031 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.372525  288031 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.372561  288031 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.372197  288031 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.374671  288031 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.374725  288031 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374874  288031 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.374973  288031 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.374986  288031 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.375071  288031 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
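The eight "daemon lookup ... No such image" lines above are expected on a fresh host: minikube asks the local Docker daemon for each image first and only falls back to its on-disk cache when that lookup fails. A compact sketch of that fallback, where fromDaemon and fromCache are hypothetical stand-ins for image.go's internals:

package main

import (
	"errors"
	"fmt"
)

// retrieve tries the local daemon first, then the minikube cache.
func retrieve(img string, fromDaemon, fromCache func(string) error) error {
	if err := fromDaemon(img); err != nil {
		fmt.Printf("daemon lookup for %s: %v; falling back to cache\n", img, err)
		return fromCache(img)
	}
	return nil
}

func main() {
	fromDaemon := func(string) error { return errors.New("No such image") }
	fromCache := func(string) error { return nil }
	_ = retrieve("registry.k8s.io/pause:3.10.1", fromDaemon, fromCache)
}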
	I1210 06:56:03.727178  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:56:03.727250  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.731066  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:56:03.731131  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.735451  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:56:03.735512  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.736230  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:56:03.736288  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.743134  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:56:03.743203  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:56:03.749746  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:56:03.749821  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.753657  288031 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:56:03.753695  288031 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.753742  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.773282  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:56:03.773355  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.790557  288031 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:56:03.790597  288031 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.790644  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.790733  288031 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:56:03.790752  288031 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.790779  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.799555  288031 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:56:03.799644  288031 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.799725  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.806996  288031 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:56:03.807106  288031 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.807186  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.814114  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.814221  288031 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:56:03.814280  288031 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.814358  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.826776  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.826945  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.827124  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.827225  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:03.827327  288031 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:56:03.827372  288031 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.827436  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.903162  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.903368  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.906563  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.906718  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.906821  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.906908  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.907050  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.003323  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.003515  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:04.011136  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.011298  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.011413  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:04.011544  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:04.011642  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:04.089211  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.089350  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.089480  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.134911  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:56:04.135033  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:04.135102  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:56:04.135154  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.135223  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.135271  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:56:04.135322  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:04.135372  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.135418  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.155745  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.155780  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:56:04.155836  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.155928  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.221987  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222077  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:56:04.222179  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:56:04.222222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:56:04.222311  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:56:04.222345  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:56:04.222453  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222565  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222646  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:56:04.222687  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:56:04.222775  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222808  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:56:04.300685  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.300730  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
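The stat/scp pairs in this stretch implement copy-only-if-missing: stat the remote path, and transfer the cached tar only when the stat exits with status 1. Sketched here with a hypothetical Runner interface (minikube's real one lives in ssh_runner.go):

package main

import "fmt"

// Runner abstracts "run a command on the node" / "scp a file to it".
type Runner interface {
	Run(cmd string) error
	Copy(src, dst string) error
}

// ensureFile mirrors the existence-check-then-scp pattern in the log.
func ensureFile(r Runner, src, dst string) error {
	if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, dst)); err == nil {
		return nil // already on the node, skip the transfer
	}
	return r.Copy(src, dst) // "scp ... --> /var/lib/minikube/images/..."
}

type nopRunner struct{}

func (nopRunner) Run(string) error { return fmt.Errorf("No such file or directory") }
func (nopRunner) Copy(src, dst string) error {
	fmt.Println("scp", src, "->", dst)
	return nil
}

func main() {
	_ = ensureFile(nopRunner{}, "/cache/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1")
}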
	I1210 06:56:04.320496  288031 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.321128  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1210 06:56:04.472464  288031 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:56:04.472630  288031 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:56:04.472710  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.604775  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:56:04.616616  288031 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:56:04.616662  288031 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.616713  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:04.705496  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.795703  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.795789  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.834471  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074424  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.27860408s)
	I1210 06:56:06.074538  288031 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.240038061s)
	I1210 06:56:06.074651  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074744  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:56:06.074784  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.074841  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.117004  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:56:06.117113  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:07.020903  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:56:07.020935  288031 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.020987  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.021057  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:56:07.021071  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:56:08.105154  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.084144622s)
	I1210 06:56:08.105190  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:56:08.105213  288031 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:08.105277  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:09.435879  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.330576141s)
	I1210 06:56:09.435909  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:56:09.435927  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:09.435980  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:10.441205  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.005199332s)
	I1210 06:56:10.441234  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:56:10.441253  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:10.441308  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:11.471539  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.03020309s)
	I1210 06:56:11.471569  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:56:11.471585  288031 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.471630  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.808584  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:56:11.808617  288031 cache_images.go:125] Successfully loaded all cached images
	I1210 06:56:11.808624  288031 cache_images.go:94] duration metric: took 8.436487473s to LoadCachedImages
	I1210 06:56:11.808636  288031 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:56:11.808725  288031 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
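Note the empty `ExecStart=` in the kubelet drop-in above: in a systemd override, the base unit's ExecStart must first be cleared before a new command is assigned, otherwise the service would carry two ExecStart directives and fail to load.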
	I1210 06:56:11.808792  288031 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:56:11.836989  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:56:11.837009  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:56:11.837023  288031 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:56:11.837046  288031 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:56:11.837170  288031 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:56:11.837238  288031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.845539  288031 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:56:11.845605  288031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.853470  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:56:11.853499  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:56:11.853544  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:56:11.853564  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:56:11.853477  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:11.853636  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:56:11.870493  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:56:11.870518  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:56:11.870493  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:56:11.870541  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:56:11.870547  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:56:11.892072  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:56:11.892110  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
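The three "Not caching binary" lines above show the download pattern: each binary comes from dl.k8s.io with its published .sha256 file as the checksum source. A by-hand equivalent for one binary, as a sketch (version and arch taken from this run):

    # fetch kubectl and verify it against the published digest (sketch)
    V=v1.35.0-rc.1; ARCH=arm64
    curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/kubectl"
    echo "$(curl -fsSL "https://dl.k8s.io/release/${V}/bin/linux/${ARCH}/kubectl.sha256")  kubectl" \
        | sha256sum --check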
	I1210 06:56:12.684721  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:56:12.692932  288031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 06:56:12.706015  288031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:56:12.719741  288031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1210 06:56:12.733262  288031 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:56:12.737005  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
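The bash one-liner above is minikube's idempotent hosts update: strip any existing control-plane.minikube.internal line, append the fresh mapping, and copy the result back over /etc/hosts. The same pattern, generalized as a hypothetical helper:

    # sketch: idempotent /etc/hosts entry, mirroring the one-liner above
    # (dots in $name are treated as regex wildcards, exactly as in the original)
    update_hosts_entry() {
        local ip="$1" name="$2"
        { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
        sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    update_hosts_entry 192.168.76.2 control-plane.minikube.internal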
	I1210 06:56:12.746629  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:12.858808  288031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:56:12.875513  288031 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 06:56:12.875541  288031 certs.go:195] generating shared ca certs ...
	I1210 06:56:12.875592  288031 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:12.875802  288031 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:56:12.875887  288031 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:56:12.875902  288031 certs.go:257] generating profile certs ...
	I1210 06:56:12.875985  288031 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 06:56:12.876002  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt with IP's: []
	I1210 06:56:13.076032  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt ...
	I1210 06:56:13.076068  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt: {Name:mkf7bb14938883b10d68a49b8ce34d3c2146efc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076259  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key ...
	I1210 06:56:13.076271  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key: {Name:mk990176085bdcef2cd12b2c8873345669259230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076363  288031 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 06:56:13.076378  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 06:56:13.460966  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb ...
	I1210 06:56:13.461005  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb: {Name:mk5f1859a12684f1b2417133b2abe5b0cc7114b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461185  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb ...
	I1210 06:56:13.461201  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb: {Name:mk2fe3162e58fbb8aab1f63fc8fe494c68c7632e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461286  288031 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt
	I1210 06:56:13.461362  288031 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key
	I1210 06:56:13.461420  288031 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 06:56:13.461442  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt with IP's: []
	I1210 06:56:13.583028  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt ...
	I1210 06:56:13.583055  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt: {Name:mk85677ff817d69f49f025f68ba6ab54589ffc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.583231  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key ...
	I1210 06:56:13.583244  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key: {Name:mke6a5c0bf07d17ef15ab36a3c463f1af3ef2e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
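The apiserver serving cert generated above is signed for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] — the service-network VIP, loopback, and the node IP. A quick way to confirm what actually landed in the cert, as a sketch using this run's profile path:

    # inspect the SANs baked into the freshly generated apiserver cert (sketch)
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt \
        | grep -A1 'Subject Alternative Name'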
	I1210 06:56:13.583429  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:56:13.583478  288031 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:56:13.583491  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:56:13.583519  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:56:13.583547  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:56:13.583575  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:56:13.583632  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:13.584222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:56:13.602582  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:56:13.622006  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:56:13.639862  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:56:13.658651  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:56:13.680241  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:56:13.700023  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:56:13.719444  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:56:13.736929  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:56:13.754184  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:56:13.772309  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:56:13.789835  288031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:56:13.801999  288031 ssh_runner.go:195] Run: openssl version
	I1210 06:56:13.808616  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.815940  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:56:13.823193  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826846  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826907  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.867540  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.875137  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.882628  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.890295  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:56:13.898236  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902139  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902206  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.945638  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:56:13.954270  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:56:13.962740  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.971630  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:56:13.979227  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983241  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983361  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:56:14.024714  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:56:14.032691  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
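The test -s / ln -fs / openssl x509 -hash sequence above is the standard OpenSSL CA-directory layout: each PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL can locate it by hash. Reduced to its core (sketch, hypothetical cert name):

    # derive the <hash>.0 link name the way the commands above do (sketch)
    PEM=/usr/share/ca-certificates/mycert.pem           # hypothetical cert
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"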
	I1210 06:56:14.040565  288031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:56:14.044474  288031 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:56:14.044584  288031 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:56:14.044664  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:56:14.044727  288031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:56:14.070428  288031 cri.go:89] found id: ""
	I1210 06:56:14.070496  288031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:56:14.078638  288031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:56:14.086602  288031 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:56:14.086714  288031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:56:14.094816  288031 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:56:14.094840  288031 kubeadm.go:158] found existing configuration files:
	
	I1210 06:56:14.094921  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:56:14.102760  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:56:14.102835  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:56:14.110132  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:56:14.117992  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:56:14.118105  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:56:14.125816  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.133574  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:56:14.133680  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.141074  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:56:14.148896  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:56:14.148967  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
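The four grep/rm pairs above are minikube's stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it. The same sweep as a loop (sketch):

    # stale-config sweep, looped (sketch of the four command pairs above)
    # a missing file fails the grep and is rm'd, which is a no-op on first start
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done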
	I1210 06:56:14.156718  288031 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
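The Start line above wraps kubeadm init in `env PATH=...` so the version-pinned binaries under /var/lib/minikube/binaries/v1.35.0-rc.1 shadow anything on the system path, and skips the preflight checks that cannot pass inside a docker-driver container (SystemVerification, Swap, Mem, and the static-manifest checks). The PATH-pinning half in isolation, as a sketch:

    # env rewrites PATH before exec, so kubeadm resolves from the pinned dir (sketch)
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm version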
	I1210 06:56:14.194063  288031 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:56:14.194238  288031 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:56:14.263671  288031 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:56:14.263788  288031 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:56:14.263850  288031 kubeadm.go:319] OS: Linux
	I1210 06:56:14.263931  288031 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:56:14.264002  288031 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:56:14.264081  288031 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:56:14.264151  288031 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:56:14.264228  288031 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:56:14.264299  288031 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:56:14.264372  288031 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:56:14.264442  288031 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:56:14.264516  288031 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:56:14.342503  288031 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:56:14.342615  288031 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:56:14.342711  288031 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:56:14.355434  288031 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:56:14.365012  288031 out.go:252]   - Generating certificates and keys ...
	I1210 06:56:14.365181  288031 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:56:14.365286  288031 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:56:14.676353  288031 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:56:14.776617  288031 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:56:14.831643  288031 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:56:15.344970  288031 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:56:15.738235  288031 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:56:15.738572  288031 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:15.867481  288031 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:56:15.867849  288031 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:16.524781  288031 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:56:16.857089  288031 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:56:17.277023  288031 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:56:17.277264  288031 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:56:17.403345  288031 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:56:17.551288  288031 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:56:17.791106  288031 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:56:17.963150  288031 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:56:18.214947  288031 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:56:18.216045  288031 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:56:18.219851  288031 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:56:18.238517  288031 out.go:252]   - Booting up control plane ...
	I1210 06:56:18.238649  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:56:18.238733  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:56:18.238803  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:56:18.250848  288031 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:56:18.250999  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:56:18.258800  288031 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:56:18.259935  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:56:18.260158  288031 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:56:18.423681  288031 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:56:18.423807  288031 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:58:34.995507  266079 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000116079s
	I1210 06:58:34.995538  266079 kubeadm.go:319] 
	I1210 06:58:34.995597  266079 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:58:34.995631  266079 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:58:34.995735  266079 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:58:34.995740  266079 kubeadm.go:319] 
	I1210 06:58:34.995845  266079 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:58:34.995886  266079 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:58:34.995923  266079 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:58:34.995928  266079 kubeadm.go:319] 
	I1210 06:58:35.000052  266079 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:58:35.000496  266079 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:58:35.000614  266079 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:58:35.000866  266079 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:58:35.000872  266079 kubeadm.go:319] 
	I1210 06:58:35.000939  266079 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
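The failure above is kubeadm's kubelet health probe timing out: it polls http://127.0.0.1:10248/healthz for up to 4m0s and the socket never opens. The probe and the two suggested follow-ups can be run by hand on the node, as a sketch built from the commands quoted in the error text:

    # reproduce kubeadm's kubelet-check by hand (a healthy kubelet answers "ok")
    curl -sSL http://127.0.0.1:10248/healthz
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet | tail -n 50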
	I1210 06:58:35.001867  266079 kubeadm.go:403] duration metric: took 8m5.625012416s to StartCluster
	I1210 06:58:35.001964  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:58:35.002061  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:58:35.029739  266079 cri.go:89] found id: ""
	I1210 06:58:35.029800  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.029809  266079 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:58:35.029823  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:58:35.029903  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:58:35.059137  266079 cri.go:89] found id: ""
	I1210 06:58:35.059162  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.059171  266079 logs.go:284] No container was found matching "etcd"
	I1210 06:58:35.059177  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:58:35.059235  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:58:35.084571  266079 cri.go:89] found id: ""
	I1210 06:58:35.084597  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.084606  266079 logs.go:284] No container was found matching "coredns"
	I1210 06:58:35.084613  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:58:35.084678  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:58:35.113733  266079 cri.go:89] found id: ""
	I1210 06:58:35.113756  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.113765  266079 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:58:35.113772  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:58:35.113830  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:58:35.138121  266079 cri.go:89] found id: ""
	I1210 06:58:35.138147  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.138156  266079 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:58:35.138162  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:58:35.138219  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:58:35.164400  266079 cri.go:89] found id: ""
	I1210 06:58:35.164423  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.164432  266079 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:58:35.164438  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:58:35.164496  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:58:35.188393  266079 cri.go:89] found id: ""
	I1210 06:58:35.188416  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.188424  266079 logs.go:284] No container was found matching "kindnet"
	I1210 06:58:35.188434  266079 logs.go:123] Gathering logs for containerd ...
	I1210 06:58:35.188445  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:58:35.229460  266079 logs.go:123] Gathering logs for container status ...
	I1210 06:58:35.229497  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:58:35.258104  266079 logs.go:123] Gathering logs for kubelet ...
	I1210 06:58:35.258133  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:58:35.314798  266079 logs.go:123] Gathering logs for dmesg ...
	I1210 06:58:35.314833  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:58:35.327838  266079 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:58:35.327863  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:58:35.388749  266079 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 06:58:35.388774  266079 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:58:35.388804  266079 out.go:285] * 
	W1210 06:58:35.388856  266079 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.388874  266079 out.go:285] * 
	W1210 06:58:35.390983  266079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:58:35.395719  266079 out.go:203] 
	W1210 06:58:35.397686  266079 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.397726  266079 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:58:35.397746  266079 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:58:35.401447  266079 out.go:203] 
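Minikube's suggestion above is to restart with the kubelet cgroup driver forced to systemd (this run used cgroupfs, per the KubeletConfiguration earlier). As a sketch of that retry, with the profile left as a placeholder and the driver/runtime flags matching this suite's configuration:

    # hypothetical retry per the suggestion above; <profile> is a placeholder
    minikube start -p <profile> --driver=docker --container-runtime=containerd \
        --extra-config=kubelet.cgroup-driver=systemd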
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:50:21 no-preload-320236 containerd[758]: time="2025-12-10T06:50:21.196813280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.210073933Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.212364720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.226922228Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.227913310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.535290347Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.537474679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.544644107Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.545322891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.456656579Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.458899750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.466570582Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.467486192Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.601587990Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.603772633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.613560498Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.614339090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.601365588Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.603910785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.611697236Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.612195825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.983871691Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.986420408Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.993743905Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.994155757Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:38.053904    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:38.054698    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:38.056584    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:38.057121    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:38.058770    5671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 06:58:38 up  1:41,  0 user,  load average: 1.00, 1.53, 1.98
	Linux no-preload-320236 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:58:34 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:35 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 06:58:35 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:35 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:35 no-preload-320236 kubelet[5440]: E1210 06:58:35.721077    5440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:35 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:35 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:36 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:58:36 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:36 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:36 no-preload-320236 kubelet[5537]: E1210 06:58:36.489326    5537 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:36 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:36 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 kubelet[5567]: E1210 06:58:37.243612    5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 kubelet[5662]: E1210 06:58:37.992109    5662 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
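The kubelet log above is the root cause of this failure: kubelet v1.35.0-rc.1 validates the host cgroup mode at startup and exits when it finds cgroup v1, so systemd restart-loops it (counters 321 through 324) and the apiserver on localhost:8443 never comes up, which is why `describe nodes` sees connection refused. The host here (Ubuntu 20.04, kernel 5.15.0-1084-aws) still boots with cgroup v1 by default. A minimal sketch of the same check, run against the host and assuming golang.org/x/sys/unix is available (this is not minikube's own code):

	// cgroupcheck.go - sketch: report whether /sys/fs/cgroup is the cgroup v2
	// unified hierarchy, which kubelet v1.35.0-rc.1 requires per the error above.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		// Statfs tells us which filesystem backs the cgroup mount point.
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified): kubelet can start")
		} else {
			fmt.Println("cgroup v1: kubelet v1.35.0-rc.1 refuses to start")
		}
	}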
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 6 (339.300043ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:58:38.519223  293242 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
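Exit status 6 here comes from the kubeconfig lookup, not from the container: status.go:458 reports that the "no-preload-320236" entry is missing from the test kubeconfig, which is also why the output warns about a stale context. A sketch of that kind of lookup, assuming k8s.io/client-go is available (minikube's own endpoint check may differ):

	// kubeconfig-check.go - sketch: confirm a profile has a cluster entry in the
	// kubeconfig, mirroring the "does not appear in .../kubeconfig" error above.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// LoadFromFile parses the kubeconfig without contacting any cluster.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Clusters["no-preload-320236"]; !ok {
			fmt.Println("profile missing from kubeconfig; run `minikube update-context`")
		}
	}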
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-320236
helpers_test.go:244: (dbg) docker inspect no-preload-320236:

-- stdout --
	[
	    {
	        "Id": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	        "Created": "2025-12-10T06:50:11.529745127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266409,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:50:11.59482855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hosts",
	        "LogPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df-json.log",
	        "Name": "/no-preload-320236",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-320236:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-320236",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	                "LowerDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-320236",
	                "Source": "/var/lib/docker/volumes/no-preload-320236/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-320236",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-320236",
	                "name.minikube.sigs.k8s.io": "no-preload-320236",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85ae7e8702e41f92b33b5a42b651a54aa9c0e327b78652a75f1a51d370271f8b",
	            "SandboxKey": "/var/run/docker/netns/85ae7e8702e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-320236": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:07:05:69:57:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ffc647756423b4e81a1338ec5e1d5f1765ed1034ce5ae186c5fbcc84bf8cb09",
	                    "EndpointID": "d093b0e10fa0218a37c48573bc31f25266756d6a2b6d0253a5c740e71d806388",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-320236",
	                        "afeff1ab0e38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
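The inspect output shows the container itself is healthy: State.Status is "running" and 8443/tcp is bound to 127.0.0.1:33071, so the apiserver outage is inside the guest (the kubelet loop above), not a Docker networking problem. A sketch of reading that binding programmatically, assuming the Docker Go SDK github.com/docker/docker/client (the test harness itself shells out to `docker inspect`):

	// inspect-port.go - sketch: read the host port bound to 8443/tcp for the
	// profile container, as shown in the NetworkSettings.Ports dump above.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		info, err := cli.ContainerInspect(context.Background(), "no-preload-320236")
		if err != nil {
			panic(err)
		}
		// Ports maps container ports to the ephemeral 127.0.0.1 host bindings.
		for _, b := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
			fmt.Printf("8443/tcp -> %s:%s\n", b.HostIP, b.HostPort)
		}
	}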
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 6 (326.235719ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:58:38.865781  293319 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
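The two status probes disagree because each --format flag renders a different field of the same status value through a Go template: {{.Host}} prints "Running" (the container is up) while {{.APIServer}} printed "Stopped". A standalone sketch of that rendering; the struct fields are assumed for illustration and are not minikube's actual type:

	// status-format.go - sketch: render a --format style Go template against a
	// status value, as `minikube status --format={{.Host}}` does.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		// The flag value becomes the template body; fields resolve by name.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}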
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-320236 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-806899                                                                                                                                                                                                                                │ old-k8s-version-806899       │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-712093                                                                                                                                                                                                                             │ kubernetes-upgrade-712093    │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-451123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:51 UTC │ 10 Dec 25 06:52 UTC │
	│ stop    │ -p embed-certs-451123 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-451123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:53 UTC │
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:55:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:55:54.981794  288031 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:55:54.981926  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.981937  288031 out.go:374] Setting ErrFile to fd 2...
	I1210 06:55:54.981942  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.982225  288031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:55:54.982645  288031 out.go:368] Setting JSON to false
	I1210 06:55:54.983532  288031 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5905,"bootTime":1765343850,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:55:54.983604  288031 start.go:143] virtualization:  
	I1210 06:55:54.987589  288031 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:55:54.990952  288031 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:55:54.991143  288031 notify.go:221] Checking for updates...
	I1210 06:55:54.999718  288031 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:55:55.004245  288031 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:55:55.007947  288031 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:55:55.011263  288031 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:55:55.014567  288031 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:55:55.018346  288031 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:55:55.018474  288031 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:55:55.050040  288031 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:55:55.050159  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.110692  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.101413341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.110829  288031 docker.go:319] overlay module found
	I1210 06:55:55.114039  288031 out.go:179] * Using the docker driver based on user configuration
	I1210 06:55:55.116970  288031 start.go:309] selected driver: docker
	I1210 06:55:55.116990  288031 start.go:927] validating driver "docker" against <nil>
	I1210 06:55:55.117003  288031 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:55:55.117774  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.187658  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.175913019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.187828  288031 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:55:55.187862  288031 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:55:55.188080  288031 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:55:55.191065  288031 out.go:179] * Using Docker driver with root privileges
	I1210 06:55:55.193975  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:55:55.194040  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:55:55.194060  288031 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:55:55.194137  288031 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:55:55.197188  288031 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 06:55:55.199998  288031 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:55:55.202945  288031 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:55:55.205774  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:55:55.205946  288031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:55:55.228535  288031 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:55:55.228555  288031 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:55:55.253626  288031 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 06:55:55.392999  288031 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 06:55:55.393221  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:55:55.393258  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json: {Name:mke358d8c3878b6ccc086ae75b08bfbb6079278d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:55:55.393289  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.393417  288031 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:55:55.393461  288031 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.393543  288031 start.go:364] duration metric: took 46.523µs to acquireMachinesLock for "newest-cni-168808"
	I1210 06:55:55.393571  288031 start.go:93] Provisioning new machine with config: &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:55:55.393679  288031 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:55:55.397127  288031 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:55:55.397358  288031 start.go:159] libmachine.API.Create for "newest-cni-168808" (driver="docker")
	I1210 06:55:55.397385  288031 client.go:173] LocalClient.Create starting
	I1210 06:55:55.397438  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 06:55:55.397479  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397497  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397545  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 06:55:55.397561  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397572  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397949  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:55:55.421587  288031 cli_runner.go:211] docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:55:55.421662  288031 network_create.go:284] running [docker network inspect newest-cni-168808] to gather additional debugging logs...
	I1210 06:55:55.421680  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808
	W1210 06:55:55.440445  288031 cli_runner.go:211] docker network inspect newest-cni-168808 returned with exit code 1
	I1210 06:55:55.440476  288031 network_create.go:287] error running [docker network inspect newest-cni-168808]: docker network inspect newest-cni-168808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-168808 not found
	I1210 06:55:55.440491  288031 network_create.go:289] output of [docker network inspect newest-cni-168808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-168808 not found
	
	** /stderr **
	I1210 06:55:55.440592  288031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:55:55.472278  288031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 06:55:55.472550  288031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 06:55:55.472849  288031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 06:55:55.473245  288031 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fe00}
	I1210 06:55:55.473272  288031 network_create.go:124] attempt to create docker network newest-cni-168808 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:55:55.473327  288031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-168808 newest-cni-168808
	I1210 06:55:55.535150  288031 network_create.go:108] docker network newest-cni-168808 192.168.76.0/24 created
	I1210 06:55:55.535181  288031 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-168808" container
	I1210 06:55:55.535292  288031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:55:55.551392  288031 cli_runner.go:164] Run: docker volume create newest-cni-168808 --label name.minikube.sigs.k8s.io=newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:55:55.554117  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.578140  288031 oci.go:103] Successfully created a docker volume newest-cni-168808
	I1210 06:55:55.578234  288031 cli_runner.go:164] Run: docker run --rm --name newest-cni-168808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --entrypoint /usr/bin/test -v newest-cni-168808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 06:55:55.718018  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.932804  288031 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.932932  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:55:55.932947  288031 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 167.936µs
	I1210 06:55:55.932957  288031 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:55:55.932978  288031 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933015  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:55:55.933025  288031 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 53.498µs
	I1210 06:55:55.933032  288031 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933044  288031 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933075  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:55:55.933085  288031 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 42.708µs
	I1210 06:55:55.933092  288031 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933106  288031 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933143  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:55:55.933152  288031 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 47.762µs
	I1210 06:55:55.933164  288031 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933176  288031 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933206  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:55:55.933216  288031 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 41.01µs
	I1210 06:55:55.933228  288031 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933236  288031 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933268  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:55:55.933277  288031 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.945µs
	I1210 06:55:55.933283  288031 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:55:55.933292  288031 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933320  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:55:55.933328  288031 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.703µs
	I1210 06:55:55.933334  288031 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:55:55.933343  288031 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933369  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:55:55.933381  288031 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.287µs
	I1210 06:55:55.933387  288031 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:55:55.933393  288031 cache.go:87] Successfully saved all images to host disk.
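The eight cache.go blocks above all take the same fast path: acquire the image's lock, stat the tarball under .minikube/cache/images, and report success in microseconds without re-saving when it already exists. A minimal Go sketch of that check-then-skip pattern (ensureCached and the single mutex are invented for illustration; this is not minikube's actual cache code):

package main

import (
	"fmt"
	"os"
	"sync"
	"time"
)

var cacheMu sync.Mutex // stand-in for minikube's named per-image mutex

// ensureCached skips the expensive save when the tarball is already on disk.
func ensureCached(image, tarPath string, save func(image, tarPath string) error) error {
	cacheMu.Lock()
	defer cacheMu.Unlock()

	start := time.Now()
	if _, err := os.Stat(tarPath); err == nil {
		// Cache hit: mirrors the log's "exists ... took Nµs ... succeeded" path.
		fmt.Printf("cache image %q -> %q took %s\n", image, tarPath, time.Since(start))
		return nil
	}
	return save(image, tarPath) // cache miss: pull and write the tarball
}

func main() {
	_ = ensureCached("registry.k8s.io/pause:3.10.1", "/tmp/pause_3.10.1",
		func(image, tarPath string) error { return os.WriteFile(tarPath, nil, 0o644) })
}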
	I1210 06:55:56.133246  288031 oci.go:107] Successfully prepared a docker volume newest-cni-168808
	I1210 06:55:56.133310  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 06:55:56.133458  288031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:55:56.133555  288031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:55:56.190219  288031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-168808 --name newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-168808 --network newest-cni-168808 --ip 192.168.76.2 --volume newest-cni-168808:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:55:56.510233  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Running}}
	I1210 06:55:56.532276  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.559120  288031 cli_runner.go:164] Run: docker exec newest-cni-168808 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:55:56.616474  288031 oci.go:144] the created container "newest-cni-168808" has a running status.
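The `docker run` line above carries all of the KIC node's flags: privileged mode, tmpfs mounts for /tmp and /run, a dedicated network with a fixed IP, and published ports for the API server and SSH. A rough sketch of assembling such an invocation with os/exec, with flags copied from the log (createKicNode is a hypothetical helper, not minikube's code path):

package main

import "os/exec"

func createKicNode(name, ip, image string) *exec.Cmd {
	return exec.Command("docker", "run", "-d", "-t",
		"--privileged", "--security-opt", "seccomp=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", name, "--name", name,
		"--network", name, "--ip", ip,
		"--volume", name+":/var",
		"--memory=3072mb", "--cpus=2",
		"--publish=127.0.0.1::8443", "--publish=127.0.0.1::22",
		image)
}

func main() {
	cmd := createKicNode("newest-cni-168808", "192.168.76.2",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083")
	_ = cmd.Run() // requires a local docker daemon
}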
	I1210 06:55:56.616510  288031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa...
	I1210 06:55:56.920989  288031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:55:56.944042  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.969366  288031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:55:56.969535  288031 kic_runner.go:114] Args: [docker exec --privileged newest-cni-168808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:55:57.033434  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:57.058007  288031 machine.go:94] provisionDockerMachine start ...
	I1210 06:55:57.058103  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:55:57.089237  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:55:57.089566  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:55:57.089575  288031 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:55:57.090220  288031 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58770->127.0.0.1:33093: read: connection reset by peer
	I1210 06:56:00.364112  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.364135  288031 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 06:56:00.364212  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.456773  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.457119  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.457133  288031 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 06:56:00.645316  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.645407  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.664033  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.664382  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.664404  288031 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:56:00.815306  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:56:00.815331  288031 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:56:00.815364  288031 ubuntu.go:190] setting up certificates
	I1210 06:56:00.815372  288031 provision.go:84] configureAuth start
	I1210 06:56:00.815439  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:00.832798  288031 provision.go:143] copyHostCerts
	I1210 06:56:00.832883  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:56:00.832898  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:56:00.832975  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:56:00.833075  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:56:00.833087  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:56:00.833119  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:56:00.833186  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:56:00.833196  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:56:00.833222  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:56:00.833276  288031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
	I1210 06:56:00.918781  288031 provision.go:177] copyRemoteCerts
	I1210 06:56:00.919089  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:56:00.919173  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.937214  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.043240  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:56:01.061326  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:56:01.079140  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:56:01.096712  288031 provision.go:87] duration metric: took 281.317584ms to configureAuth
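configureAuth above generates a server certificate whose SANs match the node's addresses and names. A compact sketch with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-168808"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-168808"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}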
	I1210 06:56:01.096743  288031 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:56:01.096994  288031 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:56:01.097006  288031 machine.go:97] duration metric: took 4.038973217s to provisionDockerMachine
	I1210 06:56:01.097025  288031 client.go:176] duration metric: took 5.699623594s to LocalClient.Create
	I1210 06:56:01.097050  288031 start.go:167] duration metric: took 5.699693115s to libmachine.API.Create "newest-cni-168808"
	I1210 06:56:01.097057  288031 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 06:56:01.097073  288031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:56:01.097147  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:56:01.097204  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.117411  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.225094  288031 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:56:01.228823  288031 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:56:01.228858  288031 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:56:01.228870  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:56:01.228945  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:56:01.229044  288031 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:56:01.229154  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:56:01.237207  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:01.255822  288031 start.go:296] duration metric: took 158.728391ms for postStartSetup
	I1210 06:56:01.256262  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.275219  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:56:01.275529  288031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:56:01.275586  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.293397  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.396136  288031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:56:01.401043  288031 start.go:128] duration metric: took 6.00734179s to createHost
	I1210 06:56:01.401068  288031 start.go:83] releasing machines lock for "newest-cni-168808", held for 6.007509906s
	I1210 06:56:01.401140  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.417888  288031 ssh_runner.go:195] Run: cat /version.json
	I1210 06:56:01.417948  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.418253  288031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:56:01.418318  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.442401  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.449051  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.632926  288031 ssh_runner.go:195] Run: systemctl --version
	I1210 06:56:01.640549  288031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:56:01.645141  288031 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:56:01.645218  288031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:56:01.673901  288031 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 06:56:01.673935  288031 start.go:496] detecting cgroup driver to use...
	I1210 06:56:01.673969  288031 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:56:01.674032  288031 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:56:01.689298  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:56:01.702121  288031 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:56:01.702192  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:56:01.720186  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:56:01.738710  288031 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:56:01.852215  288031 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:56:01.989095  288031 docker.go:234] disabling docker service ...
	I1210 06:56:01.989232  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:56:02.016451  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:56:02.030687  288031 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:56:02.153586  288031 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:56:02.280278  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:56:02.293652  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:56:02.308576  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:02.458303  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:56:02.467239  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:56:02.475789  288031 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:56:02.475860  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:56:02.484995  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.493944  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:56:02.503478  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.512024  288031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:56:02.520354  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:56:02.529401  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:56:02.538409  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:56:02.548300  288031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:56:02.556042  288031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:56:02.563716  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:02.677702  288031 ssh_runner.go:195] Run: sudo systemctl restart containerd
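The run of sed edits above rewrites /etc/containerd/config.toml in place before the restart, most importantly forcing SystemdCgroup = false so containerd matches the detected cgroupfs driver. The same rewrite expressed in Go, over an inline snippet (illustrative only):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}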
	I1210 06:56:02.766228  288031 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:56:02.766303  288031 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:56:02.770737  288031 start.go:564] Will wait 60s for crictl version
	I1210 06:56:02.770834  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:02.775190  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:56:02.800314  288031 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:56:02.800416  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.821570  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.847675  288031 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:56:02.850751  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:56:02.867882  288031 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:56:02.871991  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
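The bash one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal line, then append a fresh one. An equivalent sketch in Go (upsertHost is hypothetical and works on a string copy rather than /etc/hosts itself):

package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // grep -v $'\t<name>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name) // echo "<ip>\t<name>"
	return strings.Join(kept, "\n")
}

func main() {
	fmt.Println(upsertHost("127.0.0.1\tlocalhost", "192.168.76.1", "host.minikube.internal"))
}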
	I1210 06:56:02.885356  288031 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:56:02.888273  288031 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:56:02.888501  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.049684  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.199179  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.344408  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:56:03.344500  288031 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:56:03.372099  288031 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:56:03.372123  288031 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:56:03.372188  288031 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.372216  288031 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.372401  288031 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.372426  288031 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.372484  288031 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.372525  288031 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.372561  288031 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.372197  288031 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.374671  288031 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.374725  288031 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374874  288031 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.374973  288031 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.374986  288031 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.375071  288031 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.727178  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:56:03.727250  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.731066  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:56:03.731131  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.735451  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:56:03.735512  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.736230  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:56:03.736288  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.743134  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:56:03.743203  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:56:03.749746  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:56:03.749821  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.753657  288031 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:56:03.753695  288031 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.753742  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.773282  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:56:03.773355  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.790557  288031 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:56:03.790597  288031 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.790644  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.790733  288031 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:56:03.790752  288031 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.790779  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.799555  288031 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:56:03.799644  288031 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.799725  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.806996  288031 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:56:03.807106  288031 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.807186  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.814114  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.814221  288031 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:56:03.814280  288031 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.814358  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.826776  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.826945  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.827124  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.827225  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:03.827327  288031 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:56:03.827372  288031 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.827436  288031 ssh_runner.go:195] Run: which crictl
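Each "needs transfer" decision above comes from listing the image in containerd's k8s.io namespace and failing to find it at the expected digest, after which the stale name is removed with crictl rmi. A sketch of that check (needsTransfer is invented; the substring match is a crude stand-in for real output parsing):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(name, sha string) (bool, error) {
	// same command the log runs: sudo ctr -n=k8s.io images ls name==<image>
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+name).Output()
	if err != nil {
		return false, err
	}
	// If the runtime doesn't already hold this name at this digest,
	// the cached tarball must be copied in and imported.
	return !strings.Contains(string(out), sha), nil
}

func main() {
	transfer, err := needsTransfer("registry.k8s.io/pause:3.10.1",
		"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd")
	fmt.Println(transfer, err)
}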
	I1210 06:56:03.903162  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.903368  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.906563  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.906718  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.906821  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.906908  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.907050  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.003323  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.003515  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:04.011136  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.011298  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.011413  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:04.011544  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:04.011642  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:04.089211  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.089350  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.089480  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.134911  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:56:04.135033  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:04.135102  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:56:04.135154  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.135223  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.135271  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:56:04.135322  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:04.135372  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.135418  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.155745  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.155780  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:56:04.155836  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.155928  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.221987  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222077  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:56:04.222179  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:56:04.222222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:56:04.222311  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:56:04.222345  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:56:04.222453  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222565  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222646  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:56:04.222687  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:56:04.222775  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222808  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:56:04.300685  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.300730  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1210 06:56:04.320496  288031 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.321128  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1210 06:56:04.472464  288031 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:56:04.472630  288031 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:56:04.472710  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.604775  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 06:56:04.616616  288031 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:56:04.616662  288031 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.616713  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:04.705496  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.795703  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.795789  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.834471  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074424  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.27860408s)
	I1210 06:56:06.074538  288031 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.240038061s)
	I1210 06:56:06.074651  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074744  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:56:06.074784  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.074841  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.117004  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:56:06.117113  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:07.020903  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:56:07.020935  288031 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.020987  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.021057  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:56:07.021071  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:56:08.105154  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.084144622s)
	I1210 06:56:08.105190  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:56:08.105213  288031 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:08.105277  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:09.435879  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.330576141s)
	I1210 06:56:09.435909  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:56:09.435927  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:09.435980  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:10.441205  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.005199332s)
	I1210 06:56:10.441234  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:56:10.441253  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:10.441308  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:11.471539  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.03020309s)
	I1210 06:56:11.471569  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:56:11.471585  288031 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.471630  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.808584  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:56:11.808617  288031 cache_images.go:125] Successfully loaded all cached images
	I1210 06:56:11.808624  288031 cache_images.go:94] duration metric: took 8.436487473s to LoadCachedImages
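Every image in the LoadCachedImages loop above follows the same three steps: stat the tarball inside the node, copy it over on a miss, then `ctr images import` it. A condensed sketch with os/exec; note it uses `docker exec`/`docker cp` where minikube's ssh_runner uses SSH and scp (loadImage is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func loadImage(node, hostTar, nodeTar string) error {
	// existence check: stat -c "%s %y" <nodeTar> inside the node
	if err := exec.Command("docker", "exec", node, "stat", "-c", "%s %y", nodeTar).Run(); err == nil {
		return nil // already transferred
	}
	// transfer the cached tarball into the node
	if err := exec.Command("docker", "cp", hostTar, node+":"+nodeTar).Run(); err != nil {
		return err
	}
	// load it into containerd's k8s.io namespace
	return exec.Command("docker", "exec", node, "sudo", "ctr", "-n=k8s.io", "images", "import", nodeTar).Run()
}

func main() {
	err := loadImage("newest-cni-168808",
		"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1")
	fmt.Println(err)
}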
	I1210 06:56:11.808636  288031 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:56:11.808725  288031 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
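The kubelet ExecStart above is rendered from the node's config (binary version, hostname override, node IP). A toy rendering of that drop-in with text/template; the struct fields are invented for the sketch:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		"v1.35.0-rc.1", "newest-cni-168808", "192.168.76.2",
	})
}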
	I1210 06:56:11.808792  288031 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:56:11.836989  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:56:11.837009  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:56:11.837023  288031 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:56:11.837046  288031 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:56:11.837170  288031 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:56:11.837238  288031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.845539  288031 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:56:11.845605  288031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.853470  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:56:11.853499  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:56:11.853544  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:56:11.853564  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:56:11.853477  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:11.853636  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:56:11.870493  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:56:11.870518  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:56:11.870493  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:56:11.870541  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:56:11.870547  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:56:11.892072  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:56:11.892110  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
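	The staging logic above is check-then-copy: stat each binary under /var/lib/minikube/binaries/<version> and copy it from the local cache only when the stat fails with "No such file or directory". Collapsed into a single-host shell sketch (the real transfer goes over ssh via scp; the paths are the ones from this run, and this is not minikube's actual source):

	    # Check-then-copy sketch of the binary staging step above.
	    ver=v1.35.0-rc.1
	    for b in kubeadm kubectl kubelet; do
	      dst="/var/lib/minikube/binaries/$ver/$b"
	      sudo stat -c "%s %y" "$dst" >/dev/null 2>&1 ||
	        sudo install -m 0755 "$HOME/.minikube/cache/linux/arm64/$ver/$b" "$dst"
	    done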
	I1210 06:56:12.684721  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:56:12.692932  288031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 06:56:12.706015  288031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:56:12.719741  288031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1210 06:56:12.733262  288031 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:56:12.737005  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
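	The /etc/hosts edit above is deliberately idempotent: it filters out any existing control-plane.minikube.internal line, appends the current mapping, and copies the temp file back over /etc/hosts in a single step, so rerunning it never duplicates the entry. The same command, unrolled for readability:

	    # Idempotent hosts-entry refresh, as run on the node.
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo "192.168.76.2	control-plane.minikube.internal"
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts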
	I1210 06:56:12.746629  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:12.858808  288031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:56:12.875513  288031 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 06:56:12.875541  288031 certs.go:195] generating shared ca certs ...
	I1210 06:56:12.875592  288031 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:12.875802  288031 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:56:12.875887  288031 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:56:12.875902  288031 certs.go:257] generating profile certs ...
	I1210 06:56:12.875985  288031 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 06:56:12.876002  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt with IP's: []
	I1210 06:56:13.076032  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt ...
	I1210 06:56:13.076068  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt: {Name:mkf7bb14938883b10d68a49b8ce34d3c2146efc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076259  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key ...
	I1210 06:56:13.076271  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key: {Name:mk990176085bdcef2cd12b2c8873345669259230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076363  288031 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 06:56:13.076378  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 06:56:13.460966  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb ...
	I1210 06:56:13.461005  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb: {Name:mk5f1859a12684f1b2417133b2abe5b0cc7114b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461185  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb ...
	I1210 06:56:13.461201  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb: {Name:mk2fe3162e58fbb8aab1f63fc8fe494c68c7632e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461286  288031 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt
	I1210 06:56:13.461362  288031 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key
	I1210 06:56:13.461420  288031 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 06:56:13.461442  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt with IP's: []
	I1210 06:56:13.583028  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt ...
	I1210 06:56:13.583055  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt: {Name:mk85677ff817d69f49f025f68ba6ab54589ffc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.583231  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key ...
	I1210 06:56:13.583244  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key: {Name:mke6a5c0bf07d17ef15ab36a3c463f1af3ef2e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.583429  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:56:13.583478  288031 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:56:13.583491  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:56:13.583519  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:56:13.583547  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:56:13.583575  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:56:13.583632  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:13.584222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:56:13.602582  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:56:13.622006  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:56:13.639862  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:56:13.658651  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:56:13.680241  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:56:13.700023  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:56:13.719444  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:56:13.736929  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:56:13.754184  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:56:13.772309  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:56:13.789835  288031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:56:13.801999  288031 ssh_runner.go:195] Run: openssl version
	I1210 06:56:13.808616  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.815940  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:56:13.823193  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826846  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826907  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.867540  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.875137  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.882628  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.890295  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:56:13.898236  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902139  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902206  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.945638  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:56:13.954270  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:56:13.962740  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.971630  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:56:13.979227  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983241  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983361  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:56:14.024714  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:56:14.032691  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
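	The ln targets above follow the standard OpenSSL c_rehash layout: `openssl x509 -hash -noout` prints the subject-name hash (3ec20f2e, b5213941 and 51391683 in this run), and `<hash>.0` under /etc/ssl/certs is the filename the TLS stack looks up when verifying a chain. Reproduced as a sketch for one of the certs:

	    # Derive the <hash>.0 symlink name for a CA cert (standard OpenSSL convention).
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # -> b5213941
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"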
	I1210 06:56:14.040565  288031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:56:14.044474  288031 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:56:14.044584  288031 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:56:14.044664  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:56:14.044727  288031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:56:14.070428  288031 cri.go:89] found id: ""
	I1210 06:56:14.070496  288031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:56:14.078638  288031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:56:14.086602  288031 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:56:14.086714  288031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:56:14.094816  288031 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:56:14.094840  288031 kubeadm.go:158] found existing configuration files:
	
	I1210 06:56:14.094921  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:56:14.102760  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:56:14.102835  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:56:14.110132  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:56:14.117992  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:56:14.118105  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:56:14.125816  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.133574  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:56:14.133680  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.141074  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:56:14.148896  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:56:14.148967  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:56:14.156718  288031 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:56:14.194063  288031 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:56:14.194238  288031 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:56:14.263671  288031 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:56:14.263788  288031 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:56:14.263850  288031 kubeadm.go:319] OS: Linux
	I1210 06:56:14.263931  288031 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:56:14.264002  288031 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:56:14.264081  288031 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:56:14.264151  288031 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:56:14.264228  288031 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:56:14.264299  288031 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:56:14.264372  288031 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:56:14.264442  288031 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:56:14.264516  288031 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:56:14.342503  288031 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:56:14.342615  288031 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:56:14.342711  288031 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:56:14.355434  288031 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:56:14.365012  288031 out.go:252]   - Generating certificates and keys ...
	I1210 06:56:14.365181  288031 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:56:14.365286  288031 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:56:14.676353  288031 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:56:14.776617  288031 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:56:14.831643  288031 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:56:15.344970  288031 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:56:15.738235  288031 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:56:15.738572  288031 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:15.867481  288031 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:56:15.867849  288031 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:16.524781  288031 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:56:16.857089  288031 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:56:17.277023  288031 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:56:17.277264  288031 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:56:17.403345  288031 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:56:17.551288  288031 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:56:17.791106  288031 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:56:17.963150  288031 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:56:18.214947  288031 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:56:18.216045  288031 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:56:18.219851  288031 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:56:18.238517  288031 out.go:252]   - Booting up control plane ...
	I1210 06:56:18.238649  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:56:18.238733  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:56:18.238803  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:56:18.250848  288031 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:56:18.250999  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:56:18.258800  288031 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:56:18.259935  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:56:18.260158  288031 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:56:18.423681  288031 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:56:18.423807  288031 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:58:34.995507  266079 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000116079s
	I1210 06:58:34.995538  266079 kubeadm.go:319] 
	I1210 06:58:34.995597  266079 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:58:34.995631  266079 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:58:34.995735  266079 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:58:34.995740  266079 kubeadm.go:319] 
	I1210 06:58:34.995845  266079 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:58:34.995886  266079 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:58:34.995923  266079 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:58:34.995928  266079 kubeadm.go:319] 
	I1210 06:58:35.000052  266079 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:58:35.000496  266079 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:58:35.000614  266079 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:58:35.000866  266079 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:58:35.000872  266079 kubeadm.go:319] 
	I1210 06:58:35.000939  266079 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
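	The root failure here is the kubelet healthz probe on 127.0.0.1:10248 refusing connections for the entire 4m0s window, so kubeadm never sees a healthy kubelet. The probe and the two triage commands kubeadm names can be rerun by hand inside the node (e.g. via minikube ssh); a minimal sequence:

	    # Reproduce the failing health check, then pull status and logs (run inside the node).
	    curl -sS http://127.0.0.1:10248/healthz; echo
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet -n 100 --no-pager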
	I1210 06:58:35.001867  266079 kubeadm.go:403] duration metric: took 8m5.625012416s to StartCluster
	I1210 06:58:35.001964  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:58:35.002061  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:58:35.029739  266079 cri.go:89] found id: ""
	I1210 06:58:35.029800  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.029809  266079 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:58:35.029823  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:58:35.029903  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:58:35.059137  266079 cri.go:89] found id: ""
	I1210 06:58:35.059162  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.059171  266079 logs.go:284] No container was found matching "etcd"
	I1210 06:58:35.059177  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:58:35.059235  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:58:35.084571  266079 cri.go:89] found id: ""
	I1210 06:58:35.084597  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.084606  266079 logs.go:284] No container was found matching "coredns"
	I1210 06:58:35.084613  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:58:35.084678  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:58:35.113733  266079 cri.go:89] found id: ""
	I1210 06:58:35.113756  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.113765  266079 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:58:35.113772  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:58:35.113830  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:58:35.138121  266079 cri.go:89] found id: ""
	I1210 06:58:35.138147  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.138156  266079 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:58:35.138162  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:58:35.138219  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:58:35.164400  266079 cri.go:89] found id: ""
	I1210 06:58:35.164423  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.164432  266079 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:58:35.164438  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:58:35.164496  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:58:35.188393  266079 cri.go:89] found id: ""
	I1210 06:58:35.188416  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.188424  266079 logs.go:284] No container was found matching "kindnet"
	I1210 06:58:35.188434  266079 logs.go:123] Gathering logs for containerd ...
	I1210 06:58:35.188445  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:58:35.229460  266079 logs.go:123] Gathering logs for container status ...
	I1210 06:58:35.229497  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:58:35.258104  266079 logs.go:123] Gathering logs for kubelet ...
	I1210 06:58:35.258133  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:58:35.314798  266079 logs.go:123] Gathering logs for dmesg ...
	I1210 06:58:35.314833  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:58:35.327838  266079 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:58:35.327863  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:58:35.388749  266079 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 06:58:35.388774  266079 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:58:35.388804  266079 out.go:285] * 
	W1210 06:58:35.388856  266079 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.388874  266079 out.go:285] * 
	W1210 06:58:35.390983  266079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:58:35.395719  266079 out.go:203] 
	W1210 06:58:35.397686  266079 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.397726  266079 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:58:35.397746  266079 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:58:35.401447  266079 out.go:203] 
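	The suggested retry forces the kubelet onto the systemd cgroup driver (this run's config used cgroupDriver: cgroupfs, and the preflight warning above notes that cgroup v1 support for kubelet v1.35+ additionally requires the KubeletConfiguration field FailCgroupV1 to be false). The retry, with the profile name left as a placeholder since this log interleaves several profiles:

	    # Retry with the systemd cgroup driver, exactly as the log suggests (<profile> is a placeholder).
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd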
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:50:21 no-preload-320236 containerd[758]: time="2025-12-10T06:50:21.196813280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.210073933Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.212364720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.226922228Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.227913310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.535290347Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.537474679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.544644107Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.545322891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.456656579Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.458899750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.466570582Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.467486192Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.601587990Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.603772633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.613560498Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.614339090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.601365588Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.603910785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.611697236Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.612195825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.983871691Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.986420408Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.993743905Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.994155757Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:39.497225    5805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:39.498010    5805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:39.501523    5805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:39.501887    5805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:39.503147    5805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 06:58:39 up  1:41,  0 user,  load average: 1.00, 1.53, 1.98
	Linux no-preload-320236 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:58:36 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 kubelet[5567]: E1210 06:58:37.243612    5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:37 no-preload-320236 kubelet[5662]: E1210 06:58:37.992109    5662 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:37 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:38 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 10 06:58:38 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:38 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:38 no-preload-320236 kubelet[5699]: E1210 06:58:38.753332    5699 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:38 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:38 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:58:39 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 10 06:58:39 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:39 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:58:39 no-preload-320236 kubelet[5798]: E1210 06:58:39.520625    5798 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:58:39 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:58:39 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
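The kubelet section above shows the likely root cause for this whole failure group: every kubelet start exits with "kubelet is configured to not run on a host using cgroup v1" (restart counters 323 through 326), so the apiserver never comes up and the kubectl calls earlier in the log are refused. A minimal way to confirm the node's cgroup mode, assuming shell access via minikube ssh (an illustrative triage step, not part of the recorded run):

    # Print the filesystem type backing /sys/fs/cgroup:
    # "cgroup2fs" means cgroup v2; "tmpfs" means legacy cgroup v1,
    # which this kubelet build refuses to run on.
    minikube -p no-preload-320236 ssh -- stat -fc %T /sys/fs/cgroup/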
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 6 (333.739815ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 06:58:39.946204  293550 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.99s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (110.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 06:58:40.888198    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:08.593342    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:19.663212    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:19.669700    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:19.681235    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:19.702773    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:19.744282    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:19.825801    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:19.987324    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:20.309031    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:20.951244    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:22.232584    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:24.794928    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:29.916756    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:40.158586    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:47.648100    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:00:00.645852    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m48.575337224s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-320236 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-320236 describe deploy/metrics-server -n kube-system: exit status 1 (54.563788ms)

** stderr ** 
	error: context "no-preload-320236" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-320236 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
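Note that the addon itself never got a chance to deploy: each kubectl apply callback above was refused at localhost:8443, consistent with the kubelet crash loop on this node, so the image check has nothing to inspect. A plausible triage sequence before retrying, using illustrative commands that are not part of the recorded run:

    # Probe the apiserver endpoint the addon callbacks were hitting;
    # -k skips TLS verification for a quick liveness check.
    minikube -p no-preload-320236 ssh -- curl -k https://localhost:8443/healthz
    # Retry the addon only once the probe answers.
    minikube -p no-preload-320236 addons enable metrics-server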
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-320236
helpers_test.go:244: (dbg) docker inspect no-preload-320236:

-- stdout --
	[
	    {
	        "Id": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	        "Created": "2025-12-10T06:50:11.529745127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266409,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:50:11.59482855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hosts",
	        "LogPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df-json.log",
	        "Name": "/no-preload-320236",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-320236:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-320236",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	                "LowerDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-320236",
	                "Source": "/var/lib/docker/volumes/no-preload-320236/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-320236",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-320236",
	                "name.minikube.sigs.k8s.io": "no-preload-320236",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85ae7e8702e41f92b33b5a42b651a54aa9c0e327b78652a75f1a51d370271f8b",
	            "SandboxKey": "/var/run/docker/netns/85ae7e8702e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-320236": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:07:05:69:57:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ffc647756423b4e81a1338ec5e1d5f1765ed1034ce5ae186c5fbcc84bf8cb09",
	                    "EndpointID": "d093b0e10fa0218a37c48573bc31f25266756d6a2b6d0253a5c740e71d806388",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-320236",
	                        "afeff1ab0e38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
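The inspect output above also narrows the failure down: the container is Running with 8443 published to 127.0.0.1:33071, so the Docker layer is healthy and the refused connections originate inside the guest. The mapping can be read back directly (illustrative, not from the recorded run):

    # Show the host address Docker mapped for the apiserver port.
    docker port no-preload-320236 8443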
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 6 (340.702939ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:00:28.938577  295497 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
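Both status checks in this post-mortem exit 6 because the profile's endpoint is missing from the kubeconfig, which is exactly what the stale-context warning in stdout points at. Applying the fix the warning itself suggests, plus a verification step (illustrative commands, not part of the recorded run):

    # Rewrite the kubeconfig entry for this profile, then confirm it resolves.
    minikube -p no-preload-320236 update-context
    kubectl config get-contexts no-preload-320236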
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-320236 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-712093                                                                                                                                                                                                                             │ kubernetes-upgrade-712093    │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:50 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:50 UTC │ 10 Dec 25 06:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-451123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:51 UTC │ 10 Dec 25 06:52 UTC │
	│ stop    │ -p embed-certs-451123 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-451123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:53 UTC │
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:55:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:55:54.981794  288031 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:55:54.981926  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.981937  288031 out.go:374] Setting ErrFile to fd 2...
	I1210 06:55:54.981942  288031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:54.982225  288031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:55:54.982645  288031 out.go:368] Setting JSON to false
	I1210 06:55:54.983532  288031 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5905,"bootTime":1765343850,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:55:54.983604  288031 start.go:143] virtualization:  
	I1210 06:55:54.987589  288031 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:55:54.990952  288031 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:55:54.991143  288031 notify.go:221] Checking for updates...
	I1210 06:55:54.999718  288031 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:55:55.004245  288031 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:55:55.007947  288031 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:55:55.011263  288031 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:55:55.014567  288031 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:55:55.018346  288031 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:55:55.018474  288031 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:55:55.050040  288031 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:55:55.050159  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.110692  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.101413341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.110829  288031 docker.go:319] overlay module found
	I1210 06:55:55.114039  288031 out.go:179] * Using the docker driver based on user configuration
	I1210 06:55:55.116970  288031 start.go:309] selected driver: docker
	I1210 06:55:55.116990  288031 start.go:927] validating driver "docker" against <nil>
	I1210 06:55:55.117003  288031 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:55:55.117774  288031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:55:55.187658  288031 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:55:55.175913019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:55:55.187828  288031 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:55:55.187862  288031 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:55:55.188080  288031 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:55:55.191065  288031 out.go:179] * Using Docker driver with root privileges
	I1210 06:55:55.193975  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:55:55.194040  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:55:55.194060  288031 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:55:55.194137  288031 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:55:55.197188  288031 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 06:55:55.199998  288031 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 06:55:55.202945  288031 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:55:55.205774  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:55:55.205946  288031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:55:55.228535  288031 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:55:55.228555  288031 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 06:55:55.253626  288031 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 06:55:55.392999  288031 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 06:55:55.393221  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:55:55.393258  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json: {Name:mke358d8c3878b6ccc086ae75b08bfbb6079278d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:55:55.393289  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.393417  288031 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:55:55.393461  288031 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.393543  288031 start.go:364] duration metric: took 46.523µs to acquireMachinesLock for "newest-cni-168808"
	I1210 06:55:55.393571  288031 start.go:93] Provisioning new machine with config: &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 06:55:55.393679  288031 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:55:55.397127  288031 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:55:55.397358  288031 start.go:159] libmachine.API.Create for "newest-cni-168808" (driver="docker")
	I1210 06:55:55.397385  288031 client.go:173] LocalClient.Create starting
	I1210 06:55:55.397438  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 06:55:55.397479  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397497  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397545  288031 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 06:55:55.397561  288031 main.go:143] libmachine: Decoding PEM data...
	I1210 06:55:55.397572  288031 main.go:143] libmachine: Parsing certificate...
	I1210 06:55:55.397949  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:55:55.421587  288031 cli_runner.go:211] docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:55:55.421662  288031 network_create.go:284] running [docker network inspect newest-cni-168808] to gather additional debugging logs...
	I1210 06:55:55.421680  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808
	W1210 06:55:55.440445  288031 cli_runner.go:211] docker network inspect newest-cni-168808 returned with exit code 1
	I1210 06:55:55.440476  288031 network_create.go:287] error running [docker network inspect newest-cni-168808]: docker network inspect newest-cni-168808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-168808 not found
	I1210 06:55:55.440491  288031 network_create.go:289] output of [docker network inspect newest-cni-168808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-168808 not found
	
	** /stderr **
	I1210 06:55:55.440592  288031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:55:55.472278  288031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 06:55:55.472550  288031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 06:55:55.472849  288031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 06:55:55.473245  288031 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fe00}
	I1210 06:55:55.473272  288031 network_create.go:124] attempt to create docker network newest-cni-168808 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:55:55.473327  288031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-168808 newest-cni-168808
	I1210 06:55:55.535150  288031 network_create.go:108] docker network newest-cni-168808 192.168.76.0/24 created
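	The three "skipping subnet ... that is taken" lines show the allocator walking candidate private /24s until it finds one without a backing bridge interface, then creating the docker network there. A rough sketch of that scan; treating "taken" as "the gateway IP is bound to a local interface" and the +9 stride are both inferences from this particular log, not minikube's exact rules:

```go
// subnetpick.go — sketch of the "skip taken subnet" scan seen above; the
// progression 49 -> 58 -> 67 -> 76 matches the log output.
package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether ip is already assigned to some host interface
// (e.g. a docker bridge like br-4d091f932c27 holding 192.168.49.1).
func gatewayInUse(ip net.IP) bool {
	ifaces, err := net.Interfaces()
	if err != nil {
		return false
	}
	for _, ifc := range ifaces {
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(ip) {
				return true
			}
		}
	}
	return false
}

func main() {
	// Candidate third octets, matching the progression in the log.
	for octet := 49; octet <= 254; octet += 9 {
		gw := net.IPv4(192, 168, byte(octet), 1)
		if gatewayInUse(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
		break
	}
}
```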
	I1210 06:55:55.535181  288031 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-168808" container
	I1210 06:55:55.535292  288031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:55:55.551392  288031 cli_runner.go:164] Run: docker volume create newest-cni-168808 --label name.minikube.sigs.k8s.io=newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:55:55.554117  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.578140  288031 oci.go:103] Successfully created a docker volume newest-cni-168808
	I1210 06:55:55.578234  288031 cli_runner.go:164] Run: docker run --rm --name newest-cni-168808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --entrypoint /usr/bin/test -v newest-cni-168808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
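	The --rm "preload sidecar" above is a cheap sanity check: it mounts the freshly created volume at /var and runs /usr/bin/test -d /var/lib as the entrypoint, so a non-zero exit flags an unusable volume before any real node state lands on it. The same invocation via os/exec, as a sketch (image ref and volume name copied from the log):

```go
// volcheck.go — sketch of the throwaway sidecar run above: docker exits with
// test(1)'s status, which is 0 iff /var/lib exists inside the mounted volume.
package main

import (
	"fmt"
	"os/exec"
)

func volumeUsable(volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", volume+":/var",
		image,
		"-d", "/var/lib")
	return cmd.Run()
}

func main() {
	err := volumeUsable("newest-cni-168808",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083")
	fmt.Println("volume ok:", err == nil)
}
```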
	I1210 06:55:55.718018  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:55:55.932804  288031 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.932932  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 06:55:55.932947  288031 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 167.936µs
	I1210 06:55:55.932957  288031 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 06:55:55.932978  288031 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933015  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 06:55:55.933025  288031 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 53.498µs
	I1210 06:55:55.933032  288031 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933044  288031 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933075  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 06:55:55.933085  288031 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 42.708µs
	I1210 06:55:55.933092  288031 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933106  288031 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933143  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 06:55:55.933152  288031 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 47.762µs
	I1210 06:55:55.933164  288031 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933176  288031 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933206  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 06:55:55.933216  288031 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 41.01µs
	I1210 06:55:55.933228  288031 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 06:55:55.933236  288031 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933268  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 06:55:55.933277  288031 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.945µs
	I1210 06:55:55.933283  288031 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 06:55:55.933292  288031 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933320  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 06:55:55.933328  288031 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 37.703µs
	I1210 06:55:55.933334  288031 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 06:55:55.933343  288031 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:55:55.933369  288031 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 06:55:55.933381  288031 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 39.287µs
	I1210 06:55:55.933387  288031 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 06:55:55.933393  288031 cache.go:87] Successfully saved all images to host disk.
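	Every "exists ... took NNµs" pair above is a cache hit: the tarball for that image is already on the host, so after taking a per-image lock and stat-ing the file, the save is skipped. A simplified sketch of that hit path; the named file lock from the log is reduced to a plain mutex here, and the miss path (pull and save to tar) is elided:

```go
// imgcache.go — sketch of the cache-hit check logged above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

var cacheMu sync.Mutex // stand-in for minikube's named file lock

func ensureCached(cacheDir, image string) error {
	cacheMu.Lock()
	defer cacheMu.Unlock()
	start := time.Now()
	// "registry.k8s.io/pause:3.10.1" -> ".../registry.k8s.io/pause_3.10.1"
	p := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(p); err == nil {
		fmt.Printf("cache image %q -> %q took %s (hit)\n", image, p, time.Since(start))
		return nil
	}
	return fmt.Errorf("miss for %s: pull-and-save path not shown in this sketch", image)
}

func main() {
	_ = ensureCached(os.ExpandEnv("$HOME/.minikube/cache/images/arm64"),
		"registry.k8s.io/pause:3.10.1")
}
```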
	I1210 06:55:56.133246  288031 oci.go:107] Successfully prepared a docker volume newest-cni-168808
	I1210 06:55:56.133310  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1210 06:55:56.133458  288031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 06:55:56.133555  288031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:55:56.190219  288031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-168808 --name newest-cni-168808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-168808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-168808 --network newest-cni-168808 --ip 192.168.76.2 --volume newest-cni-168808:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 06:55:56.510233  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Running}}
	I1210 06:55:56.532276  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.559120  288031 cli_runner.go:164] Run: docker exec newest-cni-168808 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:55:56.616474  288031 oci.go:144] the created container "newest-cni-168808" has a running status.
	I1210 06:55:56.616510  288031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa...
	I1210 06:55:56.920989  288031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:55:56.944042  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:56.969366  288031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:55:56.969535  288031 kic_runner.go:114] Args: [docker exec --privileged newest-cni-168808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:55:57.033434  288031 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 06:55:57.058007  288031 machine.go:94] provisionDockerMachine start ...
	I1210 06:55:57.058103  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:55:57.089237  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:55:57.089566  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:55:57.089575  288031 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:55:57.090220  288031 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58770->127.0.0.1:33093: read: connection reset by peer
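	The handshake failure here is expected: sshd inside the just-started container isn't listening yet, and the same dial succeeds about three seconds later on the next attempt. A sketch of that retry loop using golang.org/x/crypto/ssh; key auth is elided and the attempt count is arbitrary:

```go
// sshwait.go — sketch of retrying an SSH handshake until the container's
// sshd comes up; host, port, and user match the log.
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		// e.g. "ssh: handshake failed: ... connection reset by peer"
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("sshd never came up: %w", lastErr)
}

func main() {
	cfg := &ssh.ClientConfig{
		User: "docker",
		// Auth with the generated id_rsa is elided in this sketch.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
		Timeout:         5 * time.Second,
	}
	if _, err := dialWithRetry("127.0.0.1:33093", cfg, 30); err != nil {
		fmt.Println(err)
	}
}
```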
	I1210 06:56:00.364112  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.364135  288031 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 06:56:00.364212  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.456773  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.457119  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.457133  288031 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 06:56:00.645316  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 06:56:00.645407  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.664033  288031 main.go:143] libmachine: Using SSH client type: native
	I1210 06:56:00.664382  288031 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1210 06:56:00.664404  288031 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:56:00.815306  288031 main.go:143] libmachine: SSH cmd err, output: <nil>: 
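	The shell block above keeps /etc/hosts consistent with the new hostname: leave the file alone if a matching entry exists, rewrite the 127.0.1.1 line if one is present, otherwise append a new entry. The same three-way logic as a Go sketch:

```go
// hostsfix.go — sketch of the /etc/hosts fix-up script shown above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped? Then nothing to do (the no-op case in the log).
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("\n127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-168808"); err != nil {
		fmt.Println(err)
	}
}
```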
	I1210 06:56:00.815331  288031 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 06:56:00.815364  288031 ubuntu.go:190] setting up certificates
	I1210 06:56:00.815372  288031 provision.go:84] configureAuth start
	I1210 06:56:00.815439  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:00.832798  288031 provision.go:143] copyHostCerts
	I1210 06:56:00.832883  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 06:56:00.832898  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 06:56:00.832975  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 06:56:00.833075  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 06:56:00.833087  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 06:56:00.833119  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 06:56:00.833186  288031 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 06:56:00.833196  288031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 06:56:00.833222  288031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 06:56:00.833276  288031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
	I1210 06:56:00.918781  288031 provision.go:177] copyRemoteCerts
	I1210 06:56:00.919089  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:56:00.919173  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:00.937214  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.043240  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:56:01.061326  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:56:01.079140  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:56:01.096712  288031 provision.go:87] duration metric: took 281.317584ms to configureAuth
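	configureAuth generated a server certificate whose SANs cover every name and address the machine answers on (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-168808), then copied it to /etc/docker on the node. A sketch of just the SAN wiring with crypto/x509; CA loading and signing are elided, and the template fields are illustrative rather than minikube's exact ones:

```go
// servercert.go — sketch of a server-cert template with the SAN list above.
package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func serverCertTemplate(name string, ips []net.IP, dns []string) *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins." + name}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IPAddresses:  ips,                                // SAN: IP entries
		DNSNames:     dns,                                // SAN: DNS entries
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
}

func main() {
	_ = serverCertTemplate("newest-cni-168808",
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		[]string{"localhost", "minikube", "newest-cni-168808"})
}
```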
	I1210 06:56:01.096743  288031 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:56:01.096994  288031 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:56:01.097006  288031 machine.go:97] duration metric: took 4.038973217s to provisionDockerMachine
	I1210 06:56:01.097025  288031 client.go:176] duration metric: took 5.699623594s to LocalClient.Create
	I1210 06:56:01.097050  288031 start.go:167] duration metric: took 5.699693115s to libmachine.API.Create "newest-cni-168808"
	I1210 06:56:01.097057  288031 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 06:56:01.097073  288031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:56:01.097147  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:56:01.097204  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.117411  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.225094  288031 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:56:01.228823  288031 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:56:01.228858  288031 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:56:01.228870  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 06:56:01.228945  288031 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 06:56:01.229044  288031 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 06:56:01.229154  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:56:01.237207  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:01.255822  288031 start.go:296] duration metric: took 158.728391ms for postStartSetup
	I1210 06:56:01.256262  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.275219  288031 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 06:56:01.275529  288031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:56:01.275586  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.293397  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.396136  288031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:56:01.401043  288031 start.go:128] duration metric: took 6.00734179s to createHost
	I1210 06:56:01.401068  288031 start.go:83] releasing machines lock for "newest-cni-168808", held for 6.007509906s
	I1210 06:56:01.401140  288031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 06:56:01.417888  288031 ssh_runner.go:195] Run: cat /version.json
	I1210 06:56:01.417948  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.418253  288031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:56:01.418318  288031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 06:56:01.442401  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.449051  288031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 06:56:01.632926  288031 ssh_runner.go:195] Run: systemctl --version
	I1210 06:56:01.640549  288031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:56:01.645141  288031 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:56:01.645218  288031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:56:01.673901  288031 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
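	The find/-exec mv above neutralizes competing CNI configs by renaming anything matching *bridge* or *podman* (except files already ending in .mk_disabled) so containerd won't pick them up ahead of the CNI minikube installs later. A sketch of the same rename pass:

```go
// cnidisable.go — sketch of the .mk_disabled rename seen above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	d, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println(d, err)
}
```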
	I1210 06:56:01.673935  288031 start.go:496] detecting cgroup driver to use...
	I1210 06:56:01.673969  288031 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:56:01.674032  288031 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 06:56:01.689298  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:56:01.702121  288031 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:56:01.702192  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:56:01.720186  288031 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:56:01.738710  288031 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:56:01.852215  288031 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:56:01.989095  288031 docker.go:234] disabling docker service ...
	I1210 06:56:01.989232  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:56:02.016451  288031 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:56:02.030687  288031 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:56:02.153586  288031 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:56:02.280278  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:56:02.293652  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:56:02.308576  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:02.458303  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:56:02.467239  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:56:02.475789  288031 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:56:02.475860  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 06:56:02.484995  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.493944  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:56:02.503478  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:56:02.512024  288031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:56:02.520354  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:56:02.529401  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:56:02.538409  288031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:56:02.548300  288031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:56:02.556042  288031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:56:02.563716  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:02.677702  288031 ssh_runner.go:195] Run: sudo systemctl restart containerd
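	Because the host reported a "cgroupfs" driver, the sed pipeline above rewrites /etc/containerd/config.toml to SystemdCgroup = false (alongside the sandbox-image, runc v2, and conf_dir edits) before the daemon-reload and restart. The cgroup flip as a Go sketch of the same regex replace:

```go
// cgroupcfg.go — sketch of the SystemdCgroup sed edit above; the other edits
// in the log follow the same read-replace-write pattern.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func useCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	// A restart (systemctl restart containerd) is still needed to apply this.
	return os.WriteFile(path, data, 0o644)
}

func main() {
	fmt.Println(useCgroupfs("/etc/containerd/config.toml"))
}
```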
	I1210 06:56:02.766228  288031 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 06:56:02.766303  288031 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 06:56:02.770737  288031 start.go:564] Will wait 60s for crictl version
	I1210 06:56:02.770834  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:02.775190  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:56:02.800314  288031 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 06:56:02.800416  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.821570  288031 ssh_runner.go:195] Run: containerd --version
	I1210 06:56:02.847675  288031 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 06:56:02.850751  288031 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:56:02.867882  288031 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:56:02.871991  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:56:02.885356  288031 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:56:02.888273  288031 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:56:02.888501  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.049684  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.199179  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:03.344408  288031 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 06:56:03.344500  288031 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:56:03.372099  288031 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1210 06:56:03.372123  288031 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 06:56:03.372188  288031 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.372216  288031 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.372401  288031 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.372426  288031 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.372484  288031 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.372525  288031 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.372561  288031 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.372197  288031 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.374594  288031 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.374671  288031 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.374725  288031 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.374874  288031 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.374973  288031 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:03.374986  288031 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.375071  288031 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
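	Each "daemon lookup ... No such image" line above is informational, not an error: the local docker daemon is consulted first, and a miss simply routes the image through the remote/cached path that follows. A sketch of that ordering via the docker CLI:

```go
// imglookup.go — sketch of the daemon-first image resolution implied above.
package main

import (
	"fmt"
	"os/exec"
)

// inDaemon mirrors the "daemon lookup" probes in the log: `docker image
// inspect` exits non-zero with "No such image" on a miss.
func inDaemon(image string) bool {
	return exec.Command("docker", "image", "inspect", image).Run() == nil
}

func main() {
	img := "registry.k8s.io/pause:3.10.1"
	if inDaemon(img) {
		fmt.Println("using image from local daemon:", img)
	} else {
		fmt.Println("daemon lookup failed, falling back to cache/registry for", img)
	}
}
```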
	I1210 06:56:03.727178  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1210 06:56:03.727250  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.731066  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1210 06:56:03.731131  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.735451  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1210 06:56:03.735512  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.736230  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1210 06:56:03.736288  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.743134  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 06:56:03.743203  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 06:56:03.749746  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1210 06:56:03.749821  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.753657  288031 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1210 06:56:03.753695  288031 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.753742  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.773282  288031 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1210 06:56:03.773355  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.790557  288031 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1210 06:56:03.790597  288031 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.790644  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.790733  288031 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1210 06:56:03.790752  288031 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.790779  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.799555  288031 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1210 06:56:03.799644  288031 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.799725  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.806996  288031 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 06:56:03.807106  288031 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 06:56:03.807186  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.814114  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.814221  288031 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1210 06:56:03.814280  288031 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.814358  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.826776  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.826945  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.827124  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.827225  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:03.827327  288031 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1210 06:56:03.827372  288031 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.827436  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:03.903162  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:03.903368  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:03.906563  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:03.906718  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:03.906821  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:03.906908  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:03.907050  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.003323  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.003515  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 06:56:04.011136  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 06:56:04.011298  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.011413  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 06:56:04.011544  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 06:56:04.011642  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1210 06:56:04.089211  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 06:56:04.089350  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.089480  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:04.134911  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1210 06:56:04.135033  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:04.135102  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 06:56:04.135154  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.135223  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 06:56:04.135271  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1210 06:56:04.135322  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:04.135372  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.135418  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:04.155745  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.155780  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1210 06:56:04.155836  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.155928  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:04.221987  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222077  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1210 06:56:04.222179  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1210 06:56:04.222222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1210 06:56:04.222311  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 06:56:04.222345  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 06:56:04.222453  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222565  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.222646  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 06:56:04.222687  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1210 06:56:04.222775  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.222808  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1210 06:56:04.300685  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 06:56:04.300730  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1210 06:56:04.320496  288031 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 06:56:04.321128  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1210 06:56:04.472464  288031 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 06:56:04.472630  288031 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 06:56:04.472710  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.604775  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
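	The pause image just completed the full per-image pipeline visible above: stat the tar on the node, scp it over on a miss (the "Process exited with status 1" stat failures), then import it into containerd's k8s.io namespace so the kubelet can see it. The sequencing as a sketch; runSSH and scpFile are hypothetical stand-ins for minikube's ssh_runner, not real APIs:

```go
// imgload.go — sketch of the stat -> scp -> ctr import pipeline above.
package main

import "fmt"

func loadImage(localTar, remoteTar string) error {
	// 1. Existence check — a non-zero stat means the tar must be transferred.
	if err := runSSH(`stat -c "%s %y" ` + remoteTar); err != nil {
		if err := scpFile(localTar, remoteTar); err != nil {
			return err
		}
	}
	// 2. Import into containerd's k8s.io namespace.
	return runSSH("sudo ctr -n=k8s.io images import " + remoteTar)
}

// Placeholders so the sketch compiles; wire these to a real SSH client.
func runSSH(cmd string) error       { fmt.Println("ssh:", cmd); return nil }
func scpFile(src, dst string) error { fmt.Println("scp:", src, "->", dst); return nil }

func main() {
	_ = loadImage(
		"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	)
}
```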
	I1210 06:56:04.616616  288031 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 06:56:04.616662  288031 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.616713  288031 ssh_runner.go:195] Run: which crictl
	I1210 06:56:04.705496  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:04.795703  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.795789  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 06:56:04.834471  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074424  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.27860408s)
	I1210 06:56:06.074538  288031 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.240038061s)
	I1210 06:56:06.074651  288031 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:56:06.074744  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1210 06:56:06.074784  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.074841  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 06:56:06.117004  288031 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 06:56:06.117113  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:07.020903  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 06:56:07.020935  288031 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.020987  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1210 06:56:07.021057  288031 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 06:56:07.021071  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 06:56:08.105154  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.084144622s)
	I1210 06:56:08.105190  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1210 06:56:08.105213  288031 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:08.105277  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1210 06:56:09.435879  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.330576141s)
	I1210 06:56:09.435909  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1210 06:56:09.435927  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:09.435980  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 06:56:10.441205  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.005199332s)
	I1210 06:56:10.441234  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1210 06:56:10.441253  288031 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:10.441308  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 06:56:11.471539  288031 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.03020309s)
	I1210 06:56:11.471569  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1210 06:56:11.471585  288031 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.471630  288031 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 06:56:11.808584  288031 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:56:11.808617  288031 cache_images.go:125] Successfully loaded all cached images
	I1210 06:56:11.808624  288031 cache_images.go:94] duration metric: took 8.436487473s to LoadCachedImages
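
The block above shows minikube's cached-image flow: stat the tarball under /var/lib/minikube/images on the node, scp it from the local cache only when the existence check fails, then import it into containerd's k8s.io namespace with ctr. A minimal sketch of the same check-then-import sequence, run locally for illustration (the real code issues every command over SSH via ssh_runner; loadImage and the example paths here are illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadImage mirrors the log: skip the transfer when the tarball is
    // already on the node, then run the same ctr import the log shows.
    func loadImage(cachePath, nodePath string) error {
        if _, err := os.Stat(nodePath); err != nil {
            // "existence check ... Process exited with status 1" above:
            // image missing, so copy it over (scp in the real flow).
            if err := exec.Command("sudo", "cp", cachePath, nodePath).Run(); err != nil {
                return fmt.Errorf("transfer %s: %w", cachePath, err)
            }
        }
        // Equivalent of: sudo ctr -n=k8s.io images import <nodePath>
        out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", nodePath).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ctr import: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := loadImage(
            "cache/images/arm64/registry.k8s.io/pause_3.10.1", // hypothetical local cache path
            "/var/lib/minikube/images/pause_3.10.1",
        ); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

Note how the log interleaves several of these pipelines concurrently (the storage-provisioner image is even deleted and re-pulled because of the arm64/amd64 arch mismatch flagged at image.go:328), which is why transfers and imports overlap in time.
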
	I1210 06:56:11.808636  288031 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 06:56:11.808725  288031 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:56:11.808792  288031 ssh_runner.go:195] Run: sudo crictl info
	I1210 06:56:11.836989  288031 cni.go:84] Creating CNI manager for ""
	I1210 06:56:11.837009  288031 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 06:56:11.837023  288031 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:56:11.837046  288031 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:56:11.837170  288031 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:56:11.837238  288031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.845539  288031 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 06:56:11.845605  288031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:56:11.853470  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1210 06:56:11.853499  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256
	I1210 06:56:11.853544  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:56:11.853564  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 06:56:11.853477  288031 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 06:56:11.853636  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 06:56:11.870493  288031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 06:56:11.870518  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 06:56:11.870493  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 06:56:11.870541  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
	I1210 06:56:11.870547  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1210 06:56:11.892072  288031 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 06:56:11.892110  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
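
The "Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256" lines above mean each kubeadm/kubectl/kubelet binary is downloaded with verification against the published .sha256 file. A sketch of that verification step, assuming the binary and its checksum file are already on disk (file names here are placeholders):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // verify checks a downloaded binary against a published .sha256 file,
    // accepting either bare-hex or "<hex>  <name>" formats.
    func verify(binPath, sumPath string) error {
        want, err := os.ReadFile(sumPath)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(want))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file %s", sumPath)
        }
        f, err := os.Open(binPath)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != fields[0] {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, fields[0])
        }
        return nil
    }

    func main() {
        fmt.Println(verify("kubelet", "kubelet.sha256")) // hypothetical local files
    }
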
	I1210 06:56:12.684721  288031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:56:12.692932  288031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 06:56:12.706015  288031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:56:12.719741  288031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
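
The 2233-byte kubeadm.yaml staged above bundles four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, printed in full at kubeadm.go:196 earlier). A sketch that walks such a multi-document file with gopkg.in/yaml.v3, useful when checking what a run actually fed to kubeadm (path from the log; run it against any copy of the file):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more "---"-separated documents
                }
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }
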
	I1210 06:56:12.733262  288031 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:56:12.737005  288031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
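
The bash one-liner above updates /etc/hosts idempotently: filter out any existing control-plane.minikube.internal line, append the fresh mapping, and copy the result back with sudo. An equivalent sketch in Go (point it at a scratch copy when experimenting, since /etc/hosts needs root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mimics the log's one-liner: drop any stale
    // tab-separated entry for host, then append the current mapping.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var out []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // same filter as: grep -v $'\t<host>$'
            }
            out = append(out, line)
        }
        out = append(out, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("hosts.copy", "192.168.76.2", "control-plane.minikube.internal"))
    }
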
	I1210 06:56:12.746629  288031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:56:12.858808  288031 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:56:12.875513  288031 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 06:56:12.875541  288031 certs.go:195] generating shared ca certs ...
	I1210 06:56:12.875592  288031 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:12.875802  288031 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 06:56:12.875887  288031 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 06:56:12.875902  288031 certs.go:257] generating profile certs ...
	I1210 06:56:12.875985  288031 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 06:56:12.876002  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt with IP's: []
	I1210 06:56:13.076032  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt ...
	I1210 06:56:13.076068  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.crt: {Name:mkf7bb14938883b10d68a49b8ce34d3c2146efc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076259  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key ...
	I1210 06:56:13.076271  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key: {Name:mk990176085bdcef2cd12b2c8873345669259230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.076363  288031 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 06:56:13.076378  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 06:56:13.460966  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb ...
	I1210 06:56:13.461005  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb: {Name:mk5f1859a12684f1b2417133b2abe5b0cc7114b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461185  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb ...
	I1210 06:56:13.461201  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb: {Name:mk2fe3162e58fbb8aab1f63fc8fe494c68c7632e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.461286  288031 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt
	I1210 06:56:13.461362  288031 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key
	I1210 06:56:13.461420  288031 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 06:56:13.461442  288031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt with IP's: []
	I1210 06:56:13.583028  288031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt ...
	I1210 06:56:13.583055  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt: {Name:mk85677ff817d69f49f025f68ba6ab54589ffc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:56:13.583231  288031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key ...
	I1210 06:56:13.583244  288031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key: {Name:mke6a5c0bf07d17ef15ab36a3c463f1af3ef2e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
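
The certs.go/crypto.go lines above generate the profile's client cert and an apiserver serving cert whose IP SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] (the service VIP, loopback, and the node IP). A compact sketch of signing such a serving cert with an existing CA via crypto/x509; key sizes, lifetimes, and subjects here are illustrative, not minikube's exact parameters:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // signServingCert signs an apiserver cert with the given CA, embedding
    // the same IP SANs the log shows.
    func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        // Self-signed throwaway CA, standing in for the cached minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
            NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
            IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        ca, _ := x509.ParseCertificate(caDER)
        pemBytes, err := signServingCert(ca, caKey)
        if err != nil {
            panic(err)
        }
        os.Stdout.Write(pemBytes)
    }
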
	I1210 06:56:13.583429  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 06:56:13.583478  288031 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 06:56:13.583491  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 06:56:13.583519  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:56:13.583547  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:56:13.583575  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 06:56:13.583632  288031 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 06:56:13.584222  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:56:13.602582  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:56:13.622006  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:56:13.639862  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 06:56:13.658651  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:56:13.680241  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:56:13.700023  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:56:13.719444  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:56:13.736929  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 06:56:13.754184  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:56:13.772309  288031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 06:56:13.789835  288031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:56:13.801999  288031 ssh_runner.go:195] Run: openssl version
	I1210 06:56:13.808616  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.815940  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 06:56:13.823193  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826846  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.826907  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 06:56:13.867540  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.875137  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:56:13.882628  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.890295  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:56:13.898236  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902139  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.902206  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:56:13.945638  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:56:13.954270  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:56:13.962740  288031 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.971630  288031 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 06:56:13.979227  288031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983241  288031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 06:56:13.983361  288031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 06:56:14.024714  288031 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:56:14.032691  288031 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
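
Each "openssl x509 -hash -noout" run above prints the certificate's OpenSSL subject hash (e.g. b5213941 for minikubeCA.pem), and the following ln -fs points /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients can locate it by hash. A sketch shelling out to the same openssl invocation the log shows (run as root to write /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert reproduces the log's sequence: compute the subject hash of a
    // PEM and point /etc/ssl/certs/<hash>.0 at it.
    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
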
	I1210 06:56:14.040565  288031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:56:14.044474  288031 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:56:14.044584  288031 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:56:14.044664  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 06:56:14.044727  288031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:56:14.070428  288031 cri.go:89] found id: ""
	I1210 06:56:14.070496  288031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:56:14.078638  288031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:56:14.086602  288031 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:56:14.086714  288031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:56:14.094816  288031 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:56:14.094840  288031 kubeadm.go:158] found existing configuration files:
	
	I1210 06:56:14.094921  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:56:14.102760  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:56:14.102835  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:56:14.110132  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:56:14.117992  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:56:14.118105  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:56:14.125816  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.133574  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:56:14.133680  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:56:14.141074  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:56:14.148896  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:56:14.148967  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
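
The four grep/rm pairs above are stale-config cleanup: if a kubeconfig under /etc/kubernetes does not point at https://control-plane.minikube.internal:8443 (here they simply do not exist, hence grep's status 2), it is removed so the upcoming kubeadm init writes a fresh one. A condensed sketch of that loop:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            // Missing file or wrong endpoint: either way, remove it so
            // kubeadm regenerates it on init.
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(conf)
                fmt.Println("removed stale", conf)
            }
        }
    }
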
	I1210 06:56:14.156718  288031 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:56:14.194063  288031 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:56:14.194238  288031 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:56:14.263671  288031 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:56:14.263788  288031 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 06:56:14.263850  288031 kubeadm.go:319] OS: Linux
	I1210 06:56:14.263931  288031 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:56:14.264002  288031 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:56:14.264081  288031 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:56:14.264151  288031 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:56:14.264228  288031 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:56:14.264299  288031 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:56:14.264372  288031 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:56:14.264442  288031 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:56:14.264516  288031 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:56:14.342503  288031 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:56:14.342615  288031 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:56:14.342711  288031 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:56:14.355434  288031 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:56:14.365012  288031 out.go:252]   - Generating certificates and keys ...
	I1210 06:56:14.365181  288031 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:56:14.365286  288031 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:56:14.676353  288031 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:56:14.776617  288031 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:56:14.831643  288031 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:56:15.344970  288031 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:56:15.738235  288031 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:56:15.738572  288031 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:15.867481  288031 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:56:15.867849  288031 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:56:16.524781  288031 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:56:16.857089  288031 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:56:17.277023  288031 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:56:17.277264  288031 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:56:17.403345  288031 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:56:17.551288  288031 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:56:17.791106  288031 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:56:17.963150  288031 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:56:18.214947  288031 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:56:18.216045  288031 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:56:18.219851  288031 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:56:18.238517  288031 out.go:252]   - Booting up control plane ...
	I1210 06:56:18.238649  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:56:18.238733  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:56:18.238803  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:56:18.250848  288031 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:56:18.250999  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:56:18.258800  288031 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:56:18.259935  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:56:18.260158  288031 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:56:18.423681  288031 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:56:18.423807  288031 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
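
This kubelet-check is where the run's fate is decided: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and aborts init if the kubelet never answers, which is exactly what happens below. A minimal equivalent poller, as a sketch (the URL and budget are taken from the log; the 2s interval is an assumption):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // kubeadm's 4m0s budget
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://127.0.0.1:10248/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("kubelet healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("kubelet not healthy after 4m0s") // the failure seen below
    }
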
	I1210 06:58:34.995507  266079 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000116079s
	I1210 06:58:34.995538  266079 kubeadm.go:319] 
	I1210 06:58:34.995597  266079 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:58:34.995631  266079 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:58:34.995735  266079 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:58:34.995740  266079 kubeadm.go:319] 
	I1210 06:58:34.995845  266079 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:58:34.995886  266079 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:58:34.995923  266079 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:58:34.995928  266079 kubeadm.go:319] 
	I1210 06:58:35.000052  266079 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 06:58:35.000496  266079 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:58:35.000614  266079 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:58:35.000866  266079 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 06:58:35.000872  266079 kubeadm.go:319] 
	I1210 06:58:35.000939  266079 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:58:35.001867  266079 kubeadm.go:403] duration metric: took 8m5.625012416s to StartCluster
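
The preflight warnings quoted above deserve a close read: this host runs kernel 5.15.0-1084-aws on a cgroup v1 hierarchy, and the SystemVerification warning states that kubelet v1.35+ requires the kubelet option FailCgroupV1 set to false to run on cgroups v1, which is consistent with the kubelet never answering its health check here. A quick sketch for telling which hierarchy a node uses (the cgroup.controllers file is the standard cgroup v2 marker):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // On a unified (v2) hierarchy this file exists; on v1 it does not.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 - kubelet v1.35+ needs FailCgroupV1: false")
        }
    }
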
	I1210 06:58:35.001964  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:58:35.002061  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:58:35.029739  266079 cri.go:89] found id: ""
	I1210 06:58:35.029800  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.029809  266079 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:58:35.029823  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 06:58:35.029903  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:58:35.059137  266079 cri.go:89] found id: ""
	I1210 06:58:35.059162  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.059171  266079 logs.go:284] No container was found matching "etcd"
	I1210 06:58:35.059177  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 06:58:35.059235  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:58:35.084571  266079 cri.go:89] found id: ""
	I1210 06:58:35.084597  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.084606  266079 logs.go:284] No container was found matching "coredns"
	I1210 06:58:35.084613  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:58:35.084678  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:58:35.113733  266079 cri.go:89] found id: ""
	I1210 06:58:35.113756  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.113765  266079 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:58:35.113772  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:58:35.113830  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:58:35.138121  266079 cri.go:89] found id: ""
	I1210 06:58:35.138147  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.138156  266079 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:58:35.138162  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:58:35.138219  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:58:35.164400  266079 cri.go:89] found id: ""
	I1210 06:58:35.164423  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.164432  266079 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:58:35.164438  266079 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 06:58:35.164496  266079 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:58:35.188393  266079 cri.go:89] found id: ""
	I1210 06:58:35.188416  266079 logs.go:282] 0 containers: []
	W1210 06:58:35.188424  266079 logs.go:284] No container was found matching "kindnet"
	I1210 06:58:35.188434  266079 logs.go:123] Gathering logs for containerd ...
	I1210 06:58:35.188445  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 06:58:35.229460  266079 logs.go:123] Gathering logs for container status ...
	I1210 06:58:35.229497  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:58:35.258104  266079 logs.go:123] Gathering logs for kubelet ...
	I1210 06:58:35.258133  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:58:35.314798  266079 logs.go:123] Gathering logs for dmesg ...
	I1210 06:58:35.314833  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:58:35.327838  266079 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:58:35.327863  266079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:58:35.388749  266079 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:58:35.379754    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.380272    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.381844    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.382309    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 06:58:35.383666    5432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
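
All of the kubectl failures above are one symptom: nothing is listening on port 8443 because no control-plane container was ever created (every crictl query earlier returned an empty id list). A trivial reachability probe of the same port, as a sketch:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not listening:", err) // matches the refusals above
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }
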
	W1210 06:58:35.388774  266079 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:58:35.388804  266079 out.go:285] * 
	W1210 06:58:35.388856  266079 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.388874  266079 out.go:285] * 
	W1210 06:58:35.390983  266079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:58:35.395719  266079 out.go:203] 
	W1210 06:58:35.397686  266079 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000116079s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:58:35.397726  266079 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:58:35.397746  266079 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:58:35.401447  266079 out.go:203] 
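
	The suggestion logged above maps onto a concrete retry command. A minimal sketch, hedged: the profile name is a placeholder (profiles vary per test in this run), and the --extra-config value is taken verbatim from the suggestion:

		# Retry the start with the cgroup driver override suggested above,
		# then inspect the kubelet unit if it still fails to come up.
		minikube start -p <profile> --driver=docker --container-runtime=containerd \
		  --extra-config=kubelet.cgroup-driver=systemd
		systemctl status kubelet    # troubleshooting commands named in the
		journalctl -xeu kubelet     # kubeadm wait-control-plane output
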
	I1210 07:00:18.423768  288031 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000380171s
	I1210 07:00:18.423796  288031 kubeadm.go:319] 
	I1210 07:00:18.424248  288031 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:00:18.424332  288031 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:00:18.424690  288031 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:00:18.424700  288031 kubeadm.go:319] 
	I1210 07:00:18.424910  288031 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:00:18.424973  288031 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:00:18.425276  288031 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:00:18.425286  288031 kubeadm.go:319] 
	I1210 07:00:18.430059  288031 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:00:18.430830  288031 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:00:18.430957  288031 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:00:18.431231  288031 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:00:18.431244  288031 kubeadm.go:319] 
	I1210 07:00:18.431500  288031 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:00:18.431504  288031 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-168808] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000380171s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:00:18.431582  288031 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1210 07:00:18.843096  288031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:00:18.856261  288031 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:00:18.856329  288031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:00:18.864319  288031 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:00:18.864336  288031 kubeadm.go:158] found existing configuration files:
	
	I1210 07:00:18.864386  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:00:18.872311  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:00:18.872378  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:00:18.880473  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:00:18.888809  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:00:18.888898  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:00:18.896694  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:00:18.904593  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:00:18.904713  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:00:18.912542  288031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:00:18.920717  288031 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:00:18.920789  288031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:00:18.928124  288031 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:00:18.967512  288031 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:00:18.967907  288031 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:00:19.041388  288031 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:00:19.041560  288031 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:00:19.041615  288031 kubeadm.go:319] OS: Linux
	I1210 07:00:19.041688  288031 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:00:19.041765  288031 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:00:19.041839  288031 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:00:19.041914  288031 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:00:19.041993  288031 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:00:19.042098  288031 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:00:19.042164  288031 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:00:19.042294  288031 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:00:19.042373  288031 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:00:19.108959  288031 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:00:19.109206  288031 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:00:19.109320  288031 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:00:19.119464  288031 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:00:19.124812  288031 out.go:252]   - Generating certificates and keys ...
	I1210 07:00:19.125035  288031 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:00:19.125167  288031 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:00:19.125319  288031 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:00:19.125475  288031 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:00:19.125720  288031 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:00:19.125904  288031 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:00:19.126029  288031 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:00:19.126109  288031 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:00:19.126199  288031 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:00:19.126302  288031 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:00:19.126351  288031 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:00:19.126419  288031 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:00:19.602744  288031 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:00:19.748510  288031 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:00:19.958702  288031 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:00:20.047566  288031 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:00:20.269067  288031 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:00:20.269683  288031 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:00:20.272343  288031 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:00:20.275537  288031 out.go:252]   - Booting up control plane ...
	I1210 07:00:20.275663  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:00:20.275769  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:00:20.275866  288031 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:00:20.294928  288031 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:00:20.295378  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:00:20.304384  288031 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:00:20.304493  288031 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:00:20.305348  288031 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:00:20.437669  288031 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:00:20.437797  288031 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:50:21 no-preload-320236 containerd[758]: time="2025-12-10T06:50:21.196813280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.210073933Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.212364720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.226922228Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:22 no-preload-320236 containerd[758]: time="2025-12-10T06:50:22.227913310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.535290347Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.537474679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.544644107Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:23 no-preload-320236 containerd[758]: time="2025-12-10T06:50:23.545322891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.456656579Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.458899750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.466570582Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:24 no-preload-320236 containerd[758]: time="2025-12-10T06:50:24.467486192Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.601587990Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.603772633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.613560498Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:25 no-preload-320236 containerd[758]: time="2025-12-10T06:50:25.614339090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.601365588Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.603910785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.611697236Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.612195825Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.983871691Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.986420408Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.993743905Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:50:26 no-preload-320236 containerd[758]: time="2025-12-10T06:50:26.994155757Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:00:29.581079    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:00:29.581770    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:00:29.583521    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:00:29.584055    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:00:29.585695    6876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
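
	The refused connections above are consistent with the kubelet restart loop shown in the kubelet section below: with no kubelet, no static apiserver pod serves on 8443. A quick probe of the same endpoint, as a hedged sketch:

		# Probe the apiserver endpoint kubectl is failing against; with the
		# kubelet crash-looping, this is expected to be refused immediately.
		curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
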
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 07:00:29 up  1:42,  0 user,  load average: 0.66, 1.29, 1.85
	Linux no-preload-320236 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:00:26 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:00:26 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 469.
	Dec 10 07:00:26 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:26 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:26 no-preload-320236 kubelet[6751]: E1210 07:00:26.960267    6751 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:00:26 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:00:26 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:00:27 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 470.
	Dec 10 07:00:27 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:27 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:27 no-preload-320236 kubelet[6757]: E1210 07:00:27.707217    6757 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:00:27 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:00:27 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:00:28 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 471.
	Dec 10 07:00:28 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:28 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:28 no-preload-320236 kubelet[6763]: E1210 07:00:28.465889    6763 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:00:28 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:00:28 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:00:29 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 10 07:00:29 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:29 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:00:29 no-preload-320236 kubelet[6791]: E1210 07:00:29.215837    6791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:00:29 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:00:29 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
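
The kubelet section above repeats a single root cause: "kubelet is configured to not run on a host using cgroup v1". A hedged diagnostic sketch; the failCgroupV1 spelling is assumed from the upstream KubeletConfiguration field that the [WARNING SystemVerification] message calls 'FailCgroupV1':

	# Check which cgroup version the host mounts:
	stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" => v2, "tmpfs" => v1
	# On a cgroup v1 host, kubelet v1.35+ must opt in explicitly via the
	# KubeletConfiguration field (assumed spelling):
	#   failCgroupV1: false
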
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 6 (332.069004ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:00:30.034730  295723 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (110.09s)
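
The "stale minikube-vm" warning in the status output above names its own fix; a minimal sketch using this run's profile name and the same binary the harness invokes:

	# Repair the kubectl context per the warning above, then re-run the
	# same status query the harness used:
	out/minikube-linux-arm64 update-context -p no-preload-320236
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236
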

TestStartStop/group/no-preload/serial/SecondStart (370.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1210 07:00:41.607959    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:01:21.948290    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:01:38.876774    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:01:44.571692    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:02:03.529304    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:03:37.012477    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:03:40.887870    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:04:19.662382    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 80 (6m8.870211341s)

-- stdout --
	* [no-preload-320236] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-320236" primary control-plane node in "no-preload-320236" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 07:00:31.606607  296020 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:00:31.606726  296020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:00:31.606763  296020 out.go:374] Setting ErrFile to fd 2...
	I1210 07:00:31.606781  296020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:00:31.607068  296020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:00:31.607446  296020 out.go:368] Setting JSON to false
	I1210 07:00:31.608351  296020 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6182,"bootTime":1765343850,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:00:31.608452  296020 start.go:143] virtualization:  
	I1210 07:00:31.611858  296020 out.go:179] * [no-preload-320236] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:00:31.616135  296020 notify.go:221] Checking for updates...
	I1210 07:00:31.616625  296020 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:00:31.619795  296020 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:00:31.622704  296020 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:31.625649  296020 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:00:31.628623  296020 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:00:31.632108  296020 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:00:31.635513  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:31.636082  296020 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:00:31.668430  296020 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:00:31.668544  296020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:00:31.757341  296020 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:00:31.748329892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:00:31.757451  296020 docker.go:319] overlay module found
	I1210 07:00:31.760519  296020 out.go:179] * Using the docker driver based on existing profile
	I1210 07:00:31.763315  296020 start.go:309] selected driver: docker
	I1210 07:00:31.763332  296020 start.go:927] validating driver "docker" against &{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:31.763427  296020 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:00:31.764155  296020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:00:31.816369  296020 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:00:31.807572299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:00:31.816697  296020 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:00:31.816729  296020 cni.go:84] Creating CNI manager for ""
	I1210 07:00:31.816780  296020 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:00:31.816827  296020 start.go:353] cluster config:
	{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:31.820155  296020 out.go:179] * Starting "no-preload-320236" primary control-plane node in "no-preload-320236" cluster
	I1210 07:00:31.823065  296020 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:00:31.825850  296020 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:00:31.828615  296020 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:00:31.828709  296020 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:00:31.828754  296020 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 07:00:31.829080  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:31.848090  296020 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:00:31.848110  296020 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:00:31.848126  296020 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:00:31.848157  296020 start.go:360] acquireMachinesLock for no-preload-320236: {Name:mk4a67a43519a7e8fad4432e15b5aa1fee295390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:31.848210  296020 start.go:364] duration metric: took 35.34µs to acquireMachinesLock for "no-preload-320236"
	I1210 07:00:31.848227  296020 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:00:31.848233  296020 fix.go:54] fixHost starting: 
	I1210 07:00:31.848495  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:31.871386  296020 fix.go:112] recreateIfNeeded on no-preload-320236: state=Stopped err=<nil>
	W1210 07:00:31.871423  296020 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:00:31.874767  296020 out.go:252] * Restarting existing docker container for "no-preload-320236" ...
	I1210 07:00:31.874868  296020 cli_runner.go:164] Run: docker start no-preload-320236
	I1210 07:00:32.009251  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:32.156909  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:32.181453  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:32.182795  296020 kic.go:430] container "no-preload-320236" state is running.
	I1210 07:00:32.183209  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:32.232417  296020 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 07:00:32.232635  296020 machine.go:94] provisionDockerMachine start ...
	I1210 07:00:32.232693  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:32.261256  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:32.261589  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:32.261598  296020 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:00:32.262750  296020 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
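
The EOF above is expected immediately after `docker start`: sshd inside the restarted container is not accepting connections yet, and the same command succeeds a few seconds later (07:00:35). A standard-library-only sketch of that bounded dial-and-retry, assuming the forwarded SSH port 33098 from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "127.0.0.1:33098" // host port forwarded to the container's sshd
        for i := 0; i < 10; i++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("sshd reachable")
                return
            }
            // Mirrors the transient "handshake failed: EOF" window in the log.
            fmt.Println("dial failed, retrying:", err)
            time.Sleep(time.Second)
        }
    }
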
	I1210 07:00:32.410295  296020 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410397  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:00:32.410406  296020 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.804µs
	I1210 07:00:32.410415  296020 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:00:32.410426  296020 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410466  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:00:32.410472  296020 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 47.402µs
	I1210 07:00:32.410478  296020 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410488  296020 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410538  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:00:32.410543  296020 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 57.051µs
	I1210 07:00:32.410550  296020 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410561  296020 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410587  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:00:32.410592  296020 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 32.222µs
	I1210 07:00:32.410597  296020 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410607  296020 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410641  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:00:32.410646  296020 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 40.46µs
	I1210 07:00:32.410652  296020 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410666  296020 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410699  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:00:32.410704  296020 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.333µs
	I1210 07:00:32.410709  296020 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:00:32.410718  296020 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410744  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:00:32.410748  296020 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 31.541µs
	I1210 07:00:32.410754  296020 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:00:32.410763  296020 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410800  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:00:32.410805  296020 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.233µs
	I1210 07:00:32.410810  296020 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:00:32.410817  296020 cache.go:87] Successfully saved all images to host disk.
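
Each cache.go pair above acquires a per-image lock and then skips the save because the tarball is already on disk, which is why every step completes in microseconds. A sketch of that lock-then-stat pattern (lock granularity and the path below are illustrative assumptions, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "sync"
    )

    var locks sync.Map // image name -> *sync.Mutex

    func ensureCached(image, path string) {
        mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        if _, err := os.Stat(path); err == nil {
            fmt.Printf("%s exists, skipping save\n", path)
            return
        }
        fmt.Printf("would save %s to %s\n", image, path)
    }

    func main() {
        ensureCached("registry.k8s.io/pause:3.10.1",
            "/tmp/cache/images/arm64/registry.k8s.io/pause_3.10.1")
    }
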
	I1210 07:00:35.415945  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 07:00:35.415969  296020 ubuntu.go:182] provisioning hostname "no-preload-320236"
	I1210 07:00:35.416031  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:35.439002  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:35.439495  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:35.439512  296020 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-320236 && echo "no-preload-320236" | sudo tee /etc/hostname
	I1210 07:00:35.600226  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 07:00:35.600320  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:35.617143  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:35.617452  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:35.617472  296020 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:00:35.771609  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:00:35.771638  296020 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:00:35.771682  296020 ubuntu.go:190] setting up certificates
	I1210 07:00:35.771771  296020 provision.go:84] configureAuth start
	I1210 07:00:35.771846  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:35.791167  296020 provision.go:143] copyHostCerts
	I1210 07:00:35.791247  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:00:35.791260  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:00:35.791339  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:00:35.791446  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:00:35.791457  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:00:35.791485  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:00:35.791558  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:00:35.791566  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:00:35.791595  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:00:35.791661  296020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.no-preload-320236 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-320236]
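
provision.go issues a server certificate whose SANs cover every name and address the machine may be reached by (the san=[...] list above). A compact sketch of assembling such a SAN set with Go's crypto/x509; it self-signs for brevity, whereas the log signs with the minikube CA key:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-320236"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // SANs taken from the log's san=[...] list.
            DNSNames:    []string{"localhost", "minikube", "no-preload-320236"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
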
	I1210 07:00:36.056131  296020 provision.go:177] copyRemoteCerts
	I1210 07:00:36.056213  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:00:36.056259  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.074420  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.179259  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:00:36.197688  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:00:36.220673  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:00:36.237968  296020 provision.go:87] duration metric: took 466.169895ms to configureAuth
	I1210 07:00:36.237995  296020 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:00:36.238191  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:36.238203  296020 machine.go:97] duration metric: took 4.005560458s to provisionDockerMachine
	I1210 07:00:36.238212  296020 start.go:293] postStartSetup for "no-preload-320236" (driver="docker")
	I1210 07:00:36.238223  296020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:00:36.238275  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:00:36.238329  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.254857  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.358982  296020 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:00:36.362431  296020 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:00:36.362463  296020 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:00:36.362476  296020 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:00:36.362532  296020 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:00:36.362616  296020 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:00:36.362730  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:00:36.370123  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:00:36.387715  296020 start.go:296] duration metric: took 149.487982ms for postStartSetup
	I1210 07:00:36.387809  296020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:00:36.387850  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.404695  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.508174  296020 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:00:36.512870  296020 fix.go:56] duration metric: took 4.664630876s for fixHost
	I1210 07:00:36.512896  296020 start.go:83] releasing machines lock for "no-preload-320236", held for 4.664678434s
	I1210 07:00:36.512987  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:36.529627  296020 ssh_runner.go:195] Run: cat /version.json
	I1210 07:00:36.529680  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.529956  296020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:00:36.530021  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.556696  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.560591  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.658689  296020 ssh_runner.go:195] Run: systemctl --version
	I1210 07:00:36.753674  296020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:00:36.758001  296020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:00:36.758069  296020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:00:36.765538  296020 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
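
The find/mv invocation above renames any bridge or podman CNI configs to a .mk_disabled suffix so they no longer take effect; here none were present. A sketch of the same rename with filepath.Glob (the suffix convention comes from the log; the glob patterns are assumptions):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled, matching find's -not -name filter
                }
                fmt.Println("disabling", m)
                _ = os.Rename(m, m+".mk_disabled")
            }
        }
    }
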
	I1210 07:00:36.765576  296020 start.go:496] detecting cgroup driver to use...
	I1210 07:00:36.765607  296020 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:00:36.765653  296020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:00:36.782605  296020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:00:36.796109  296020 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:00:36.796200  296020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:00:36.811318  296020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:00:36.824166  296020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:00:36.940162  296020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:00:37.067248  296020 docker.go:234] disabling docker service ...
	I1210 07:00:37.067375  296020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:00:37.082860  296020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:00:37.097077  296020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:00:37.210251  296020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:00:37.318500  296020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:00:37.331193  296020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:00:37.346030  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:37.491512  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:00:37.500237  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:00:37.508872  296020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:00:37.508946  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:00:37.517510  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:00:37.526466  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:00:37.534915  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:00:37.543652  296020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:00:37.551699  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:00:37.560511  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:00:37.569071  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
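
The sed runs above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the detected cgroupfs driver, normalize the runc runtime version, and re-add enable_unprivileged_ports. A sketch of two of those edits as Go regexp replacements over an assumed config fragment:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true`

        // Equivalent of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "..."|'
        conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
            ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
        // Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
            ReplaceAllString(conf, `${1}SystemdCgroup = false`)
        fmt.Println(conf)
    }
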
	I1210 07:00:37.577739  296020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:00:37.585320  296020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:00:37.592659  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:37.721273  296020 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:00:37.812117  296020 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:00:37.812183  296020 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:00:37.815932  296020 start.go:564] Will wait 60s for crictl version
	I1210 07:00:37.815991  296020 ssh_runner.go:195] Run: which crictl
	I1210 07:00:37.819381  296020 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:00:37.842923  296020 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:00:37.842993  296020 ssh_runner.go:195] Run: containerd --version
	I1210 07:00:37.862565  296020 ssh_runner.go:195] Run: containerd --version
	I1210 07:00:37.887310  296020 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:00:37.890224  296020 cli_runner.go:164] Run: docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:00:37.905602  296020 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:00:37.909066  296020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:00:37.918252  296020 kubeadm.go:884] updating cluster {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:00:37.918438  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.069274  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.216468  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.360305  296020 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:00:38.360402  296020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:00:38.384995  296020 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:00:38.385019  296020 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:00:38.385028  296020 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:00:38.385169  296020 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:00:38.385237  296020 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:00:38.412034  296020 cni.go:84] Creating CNI manager for ""
	I1210 07:00:38.412063  296020 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:00:38.412085  296020 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:00:38.412108  296020 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320236 NodeName:no-preload-320236 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:00:38.412227  296020 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-320236"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:00:38.412299  296020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:00:38.421091  296020 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:00:38.421163  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:00:38.429922  296020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:00:38.443653  296020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:00:38.457014  296020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:00:38.471955  296020 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:00:38.475882  296020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:00:38.485504  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:38.595895  296020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:00:38.612585  296020 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236 for IP: 192.168.85.2
	I1210 07:00:38.612609  296020 certs.go:195] generating shared ca certs ...
	I1210 07:00:38.612627  296020 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:38.612815  296020 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:00:38.612878  296020 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:00:38.612890  296020 certs.go:257] generating profile certs ...
	I1210 07:00:38.612999  296020 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key
	I1210 07:00:38.613070  296020 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447
	I1210 07:00:38.613137  296020 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key
	I1210 07:00:38.613277  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:00:38.613326  296020 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:00:38.613338  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:00:38.613368  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:00:38.613404  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:00:38.613433  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:00:38.613490  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:00:38.614212  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:00:38.631972  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:00:38.649467  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:00:38.666377  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:00:38.686373  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:00:38.703781  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:00:38.723153  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:00:38.740812  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:00:38.758333  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:00:38.775839  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:00:38.793284  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:00:38.810326  296020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:00:38.822556  296020 ssh_runner.go:195] Run: openssl version
	I1210 07:00:38.829436  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.836724  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:00:38.844002  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.847779  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.847843  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.893925  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:00:38.901463  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.909031  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:00:38.916756  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.920591  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.920655  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.962196  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:00:38.969616  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.976917  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:00:38.984547  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.988142  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.988227  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:00:39.029601  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
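
The /etc/ssl/certs/<hash>.0 links tested above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming, which is why each certificate is first run through `openssl x509 -hash -noout`. A sketch that reproduces the link name for one cert (assumes the openssl binary on PATH; the cert path is from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // OpenSSL-style CA directories expect <subject-hash>.0 symlinks.
        fmt.Printf("symlink name: /etc/ssl/certs/%s.0\n", hash)
    }
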
	I1210 07:00:39.037081  296020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:00:39.040891  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:00:39.082809  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:00:39.123802  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:00:39.170233  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:00:39.211599  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:00:39.252658  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:00:39.293664  296020 kubeadm.go:401] StartCluster: {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:39.293761  296020 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:00:39.293833  296020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:00:39.326465  296020 cri.go:89] found id: ""
	I1210 07:00:39.326535  296020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:00:39.334044  296020 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:00:39.334065  296020 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:00:39.334134  296020 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:00:39.341326  296020 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:00:39.341712  296020 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:39.341813  296020 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-320236" cluster setting kubeconfig missing "no-preload-320236" context setting]
	I1210 07:00:39.342066  296020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.343566  296020 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:00:39.351071  296020 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:00:39.351101  296020 kubeadm.go:602] duration metric: took 17.030813ms to restartPrimaryControlPlane
	I1210 07:00:39.351110  296020 kubeadm.go:403] duration metric: took 57.455602ms to StartCluster
	I1210 07:00:39.351126  296020 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.351186  296020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:39.351790  296020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.351984  296020 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:00:39.352290  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:39.352337  296020 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:00:39.352428  296020 addons.go:70] Setting storage-provisioner=true in profile "no-preload-320236"
	I1210 07:00:39.352444  296020 addons.go:239] Setting addon storage-provisioner=true in "no-preload-320236"
	I1210 07:00:39.352451  296020 addons.go:70] Setting dashboard=true in profile "no-preload-320236"
	I1210 07:00:39.352465  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.352474  296020 addons.go:239] Setting addon dashboard=true in "no-preload-320236"
	W1210 07:00:39.352482  296020 addons.go:248] addon dashboard should already be in state true
	I1210 07:00:39.352506  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.352930  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.353043  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.353336  296020 addons.go:70] Setting default-storageclass=true in profile "no-preload-320236"
	I1210 07:00:39.353358  296020 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320236"
	I1210 07:00:39.353631  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.356443  296020 out.go:179] * Verifying Kubernetes components...
	I1210 07:00:39.359604  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:39.392662  296020 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:00:39.395653  296020 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:00:39.398571  296020 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:00:39.398592  296020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:00:39.398654  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.398779  296020 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:00:39.401749  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:00:39.401779  296020 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:00:39.401844  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.412459  296020 addons.go:239] Setting addon default-storageclass=true in "no-preload-320236"
	I1210 07:00:39.412502  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.412911  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.451209  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.451232  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.471156  296020 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:39.471176  296020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:00:39.471241  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.496650  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.601190  296020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:00:39.614141  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:00:39.645005  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:00:39.645028  296020 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:00:39.654222  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:39.665638  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:00:39.665659  296020 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:00:39.712904  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:00:39.712926  296020 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:00:39.726749  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:00:39.726772  296020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:00:39.740856  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:00:39.740877  296020 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:00:39.756673  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:00:39.756740  296020 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:00:39.769276  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:00:39.769343  296020 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:00:39.781575  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:00:39.781598  296020 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:00:39.794119  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:00:39.794141  296020 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:00:39.806448  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:40.411601  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.411697  296020 retry.go:31] will retry after 364.307231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.411787  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.411824  296020 retry.go:31] will retry after 175.448245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.412081  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.412126  296020 retry.go:31] will retry after 340.80415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.412177  296020 node_ready.go:35] waiting up to 6m0s for node "no-preload-320236" to be "Ready" ...
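In parallel with the addon retries, a node-readiness wait starts here ("waiting up to 6m0s"). A hedged client-go reconstruction of what such a poll looks like (the kubeconfig path is taken from the log's KUBECONFIG; the 2s interval and error handling are assumptions, not node_ready.go's actual source):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the 6m0s wait announced in the log line above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			// Same failure mode as the log: while the apiserver is down,
			// the GET itself is refused, so we keep polling.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q not Ready within %v", name, timeout)
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "no-preload-320236", 6*time.Minute))
}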
	I1210 07:00:40.587992  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:40.644838  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.644918  296020 retry.go:31] will retry after 280.859873ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.754069  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:00:40.776546  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:40.828821  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.828916  296020 retry.go:31] will retry after 208.166646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.845124  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.845178  296020 retry.go:31] will retry after 309.037844ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.926770  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:40.985165  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.985193  296020 retry.go:31] will retry after 576.96991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.037550  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:41.099191  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.099230  296020 retry.go:31] will retry after 760.269809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.154571  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:41.223133  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.223166  296020 retry.go:31] will retry after 384.5048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
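A caveat on the stderr's --validate=false hint: the flag only skips the client-side OpenAPI schema download that is failing here, while the apply request itself still has to reach the apiserver, so against a refused connection it fails either way. A sketch of the suggested invocation (manifest path from the log; wrapping it in Go exec is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --validate=false avoids the /openapi/v2 download, but the apply
	// request still needs a reachable apiserver, so a refused connection
	// fails either way.
	out, err := exec.Command("kubectl", "apply", "--validate=false", "--force",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
	if err != nil {
		fmt.Printf("apply still failed: %v\n%s", err, out)
	}
}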
	I1210 07:00:41.563176  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:41.607812  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:41.634200  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.634229  296020 retry.go:31] will retry after 958.895789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:41.670372  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.670408  296020 retry.go:31] will retry after 1.242104692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.860733  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:41.944937  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.944981  296020 retry.go:31] will retry after 1.203859969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:42.412917  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
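Note that the refusals now come from two different endpoints: the on-node kubectl dials localhost:8443 while this readiness check dials 192.168.85.2:8443 from outside the node, and both are refused, which points at the apiserver process being down rather than at container networking. A quick probe to confirm that distinction (addresses from the log; the 2s timeout is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused TCP dial means nothing is listening on the apiserver port,
	// matching both the localhost and 192.168.85.2 errors above.
	for _, addr := range []string{"localhost:8443", "192.168.85.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: listening\n", addr)
	}
}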
	I1210 07:00:42.594314  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:42.653050  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.653087  296020 retry.go:31] will retry after 1.019515228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.912735  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:42.992543  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.992575  296020 retry.go:31] will retry after 1.525694084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.149942  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:43.215395  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.215430  296020 retry.go:31] will retry after 1.081952772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.673229  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:43.753817  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.753847  296020 retry.go:31] will retry after 2.453351659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.297966  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:44.359469  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.359502  296020 retry.go:31] will retry after 2.437831877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:44.413141  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:44.518419  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:44.578484  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.578514  296020 retry.go:31] will retry after 2.525951728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.207448  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:46.269857  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.269893  296020 retry.go:31] will retry after 2.493371842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:46.413377  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:46.798249  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:46.865016  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.865052  296020 retry.go:31] will retry after 1.595518707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:47.104732  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:47.167159  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:47.167199  296020 retry.go:31] will retry after 2.421365807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
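Each stderr line suggests --validate=false as an escape hatch. That flag only skips the OpenAPI download; the subsequent POST of the manifests still needs a reachable apiserver, so it would fail here with the same connection refused. A hypothetical reproduction of the suggestion, for illustration only, not a recommended fix:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Skipping validation avoids the /openapi/v2 fetch, but the apply
	// itself still dials the apiserver, so with the server down this
	// prints the same "connection refused" seen throughout the log.
	cmd := exec.Command("kubectl", "apply", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}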
	I1210 07:00:48.461029  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:48.523416  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.523451  296020 retry.go:31] will retry after 5.045916415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.763783  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:48.826893  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.826925  296020 retry.go:31] will retry after 2.901964551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:48.913552  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:49.589035  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:49.649801  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:49.649838  296020 retry.go:31] will retry after 4.385171631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:50.913785  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:51.729508  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:51.789192  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:51.789222  296020 retry.go:31] will retry after 4.971484132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:53.412679  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
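The node_ready.go warnings are a separate poll: the harness asks the apiserver at 192.168.85.2:8443 for the node object and checks its Ready condition, and that dial is refused for the same reason. A sketch of the equivalent check with client-go; the function name is illustrative and not minikube's internals, though the kubeconfig path matches the one in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has a Ready condition set to True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "dial tcp ...: connect: connection refused"
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), cs, "no-preload-320236")
	fmt.Println(ready, err)
}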
	I1210 07:00:53.570118  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:53.628103  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:53.628135  296020 retry.go:31] will retry after 4.154709683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:54.035994  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:54.099925  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:54.099959  296020 retry.go:31] will retry after 5.104591633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:55.413548  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:56.761591  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:56.827407  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:56.827438  296020 retry.go:31] will retry after 6.353816854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:57.783555  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:57.845429  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:57.845462  296020 retry.go:31] will retry after 8.667848959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:57.912770  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:59.205067  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:59.264096  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:59.264126  296020 retry.go:31] will retry after 10.603627722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:59.912812  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:01.913336  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:03.181966  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:03.241570  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:03.241608  296020 retry.go:31] will retry after 19.837023952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:04.412784  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:06.413759  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
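The retries only stop once the apiserver comes back, so one way to avoid this noise is to gate the addon applies on the apiserver's readiness endpoint first. A sketch that waits on /readyz, assuming anonymous access to the health endpoints is enabled (the Kubernetes default); the certificate check is skipped purely to keep the example self-contained, and real callers should trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver's /readyz endpoint until it answers
// 200 OK or the deadline passes.
func waitForAPIServer(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not ready within %v", base, timeout)
}

func main() {
	fmt.Println(waitForAPIServer("https://192.168.85.2:8443", 2*time.Minute))
}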
	I1210 07:01:06.515688  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:06.581717  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:06.581752  296020 retry.go:31] will retry after 20.713933736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr repeats the same ten dashboard manifest validation errors as the failed apply above)
	W1210 07:01:08.913557  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:09.868219  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:09.930350  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:09.930381  296020 retry.go:31] will retry after 16.670877723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storage-provisioner validation error from the failed apply above)
	W1210 07:01:11.413714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:13.913676  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:16.413698  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:18.913533  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:21.413576  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:23.079136  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:23.142459  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:23.142490  296020 retry.go:31] will retry after 12.673593141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storageclass validation error from the failed apply above)
	W1210 07:01:23.913225  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:25.913289  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:26.601791  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:26.668541  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:26.668575  296020 retry.go:31] will retry after 21.28734842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storage-provisioner validation error from the failed apply above)
	I1210 07:01:27.295978  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:27.360758  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:27.360795  296020 retry.go:31] will retry after 15.710281845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr repeats the same ten dashboard manifest validation errors as the failed apply above)
	W1210 07:01:27.913387  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:29.913460  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:31.913645  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:33.913718  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:35.816320  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:35.874198  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:35.874230  296020 retry.go:31] will retry after 21.376325369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storageclass validation error from the failed apply above)
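The retry.go:31 entries record minikube's applier backing off between attempts with varying delays (20.7s, 16.7s, 12.7s, 21.3s, ...). A minimal stdlib sketch of that retry-with-backoff shape, assuming exponential growth plus jitter; the function name, delays, and attempt cap are illustrative, not minikube's actual implementation:

// A retry-with-backoff sketch mirroring the "will retry after ..." pattern
// in the log. Delays and the attempt cap are illustrative assumptions.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter, which is why the
		// logged waits are close to, but never exactly, round numbers.
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(3, 2*time.Second, func() error {
		return errors.New("connect: connection refused") // stands in for the failing apply
	})
	fmt.Println("gave up:", err)
}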
	W1210 07:01:36.412713  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:38.412808  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:40.913670  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:42.913819  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:43.072120  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:43.135982  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:43.136017  296020 retry.go:31] will retry after 16.570147181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr repeats the same ten dashboard manifest validation errors as the failed apply above)
	W1210 07:01:45.412747  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:47.913625  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:47.956911  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:48.019680  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:48.019731  296020 retry.go:31] will retry after 28.501835741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr repeats the storage-provisioner validation error from the failed apply above)
	W1210 07:01:49.913722  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:52.412735  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:54.913738  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:57.251364  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:57.311036  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:57.311129  296020 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:01:57.412814  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:59.413910  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:59.706314  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:59.768631  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:59.768667  296020 retry.go:31] will retry after 38.033263553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr repeats the same ten dashboard manifest validation errors as the failed apply above)
	W1210 07:02:01.912954  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:03.913786  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:06.413305  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:08.413743  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:10.913528  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:12.913703  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:15.412647  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:02:16.522068  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:02:16.598309  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:02:16.598419  296020 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:02:17.412833  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:19.413691  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:21.912889  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:24.412677  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:26.412851  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:28.413641  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:30.913357  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:32.913501  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:34.913596  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:37.412726  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:02:37.802376  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:02:37.868725  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:02:37.868813  296020 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:02:37.871969  296020 out.go:179] * Enabled addons: 
	I1210 07:02:37.875533  296020 addons.go:530] duration metric: took 1m58.523193068s for enable addons: enabled=[]
	W1210 07:02:39.412868  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:41.913602  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:43.913738  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:46.413639  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:48.913771  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:51.412644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:53.413621  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:55.913703  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:02:57.913798  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:00.413728  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:02.912729  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:04.913512  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:06.913772  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:09.412627  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:11.412767  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:13.913613  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:15.913755  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:18.412757  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:20.413704  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:22.913503  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:24.913715  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:27.412717  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:29.412799  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:31.912720  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:33.913761  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:36.413520  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:38.413644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:40.913728  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:43.412693  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:45.413603  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:47.413648  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:49.913012  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:52.413627  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:54.913602  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:57.413653  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:03:59.912671  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:01.913548  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:03.913688  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:06.413733  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:08.912736  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:10.913633  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:13.412730  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:15.413663  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:17.912660  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:19.913717  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:22.412670  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:24.413374  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:26.413544  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:28.413623  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:30.913676  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:33.413655  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:35.413770  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:37.913476  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:40.413253  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:42.413440  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:44.913537  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:47.413752  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:49.913615  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:52.412854  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:54.413630  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:56.912733  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:58.913708  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:01.413338  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:03.413683  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:05.413757  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:07.912695  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:09.913684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:12.413665  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:14.913629  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:17.412756  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:19.413671  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:21.913774  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:24.413622  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:26.413733  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:28.913593  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:30.913649  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:33.413576  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:35.913684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:38.412716  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:40.913647  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:43.413725  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:45.414529  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:47.913666  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:50.412769  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:52.413724  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:54.913650  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:57.413623  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:59.413665  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:01.914114  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:04.412714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:06.413234  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:08.413486  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:10.912684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:12.913304  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:15.412868  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:17.413684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:19.913294  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:21.913508  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:23.913605  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:26.413204  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:28.413461  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:30.913644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:33.413489  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:35.913510  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:38.413592  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:40.413124  296020 node_ready.go:38] duration metric: took 6m0.00088218s for node "no-preload-320236" to be "Ready" ...
	I1210 07:06:40.416430  296020 out.go:203] 
	W1210 07:06:40.419386  296020 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:06:40.419405  296020 out.go:285] * 
	* 
	W1210 07:06:40.421537  296020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:06:40.424792  296020 out.go:203] 

** /stderr **
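The stderr above pins down the failure mode: after the restart, the apiserver at 192.168.85.2:8443 never came back up, so every poll of the node's Ready condition hit "connection refused" until the 6m0s wait deadline expired and minikube exited with GUEST_START. For context on those "will retry" warnings, below is a minimal sketch of this kind of Ready-condition poll, assuming client-go; it is illustrative only, not minikube's actual node_ready.go implementation.

// readypoll.go: minimal sketch of a node Ready-condition poll, assuming
// client-go. Illustrative only; not minikube's node_ready.go implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady retries through transient apiserver errors (the
// "connection refused" warnings above) until the node reports Ready
// or the context deadline expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(2 * time.Second) // the log shows a ~2-2.5s cadence
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
		case <-ticker.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				continue // transient error: will retry, as in the warnings above
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // matches the 6m0s wait
	defer cancel()
	if err := waitNodeReady(ctx, cs, "no-preload-320236"); err != nil {
		fmt.Println(err) // here: context deadline exceeded, as in the log
	}
}

Only the deadline ends the loop, which is why the log shows unbroken retries for the full six minutes before WaitNodeCondition reports "context deadline exceeded".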
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-320236
helpers_test.go:244: (dbg) docker inspect no-preload-320236:

-- stdout --
	[
	    {
	        "Id": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	        "Created": "2025-12-10T06:50:11.529745127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296159,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:00:31.906944272Z",
	            "FinishedAt": "2025-12-10T07:00:30.524095791Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hosts",
	        "LogPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df-json.log",
	        "Name": "/no-preload-320236",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-320236:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-320236",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	                "LowerDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-320236",
	                "Source": "/var/lib/docker/volumes/no-preload-320236/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-320236",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-320236",
	                "name.minikube.sigs.k8s.io": "no-preload-320236",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be5eb1503ed127ef0c2d044ffb245c38ab2a7657e10a797a5912ae4059c29e3f",
	            "SandboxKey": "/var/run/docker/netns/be5eb1503ed1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-320236": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:26:8b:69:77:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ffc647756423b4e81a1338ec5e1d5f1765ed1034ce5ae186c5fbcc84bf8cb09",
	                    "EndpointID": "31d9f19780654066d5dbb87109e480cce007c3d0fa04a397a4cec6b92d85ea58",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-320236",
	                        "afeff1ab0e38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
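The inspect output above shows the container itself is healthy: State.Status is "running" and the apiserver port 8443/tcp is published at 127.0.0.1:33101, so the breakage is inside the guest rather than at the Docker level. A hypothetical helper that extracts just those two facts, instead of dumping the whole inspect document, could use a format template like the sketch below (container name taken from this report).

// inspectbrief.go: hypothetical sketch that shells out to `docker inspect -f`
// to pull only the run state and the published apiserver port.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Go template indexing into .NetworkSettings.Ports, whose shape is
	// visible in the JSON above: a map of "port/proto" to host bindings.
	format := `{{.State.Status}} apiserver={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostIp}}:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format, "no-preload-320236").CombinedOutput()
	if err != nil {
		fmt.Printf("docker inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out) // expected here: "running apiserver=127.0.0.1:33101"
}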
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 2 (421.789149ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
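The "(may be ok)" note reflects that `minikube status` uses its exit code to report cluster state, so a nonzero exit is not necessarily a command failure: here the host line still prints "Running" despite exit status 2. Below is a sketch of tolerating that nonzero exit the way the helper does, assuming Go's os/exec; it is not the actual helpers_test.go source.

// statusprobe.go: sketch of recording a nonzero `minikube status` exit
// without failing the post-mortem. Binary path and profile name are the
// ones from this report.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-320236", "-n", "no-preload-320236")
	out, err := cmd.Output() // out still carries stdout on a nonzero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// e.g. exit status 2 while stdout reads "Running": log and continue.
		fmt.Printf("status exited %d (may be ok): %s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // the binary could not be run at all
	}
	fmt.Printf("host: %s", out)
}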
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-320236 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │                     │
	│ stop    │ -p no-preload-320236 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ addons  │ enable dashboard -p no-preload-320236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │                     │
	│ stop    │ -p newest-cni-168808 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-168808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │ 10 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:06:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:06:00.999721  303437 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:06:00.999928  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:00.999941  303437 out.go:374] Setting ErrFile to fd 2...
	I1210 07:06:00.999948  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:01.000291  303437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:06:01.000840  303437 out.go:368] Setting JSON to false
	I1210 07:06:01.001958  303437 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6511,"bootTime":1765343850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:06:01.002049  303437 start.go:143] virtualization:  
	I1210 07:06:01.005229  303437 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:06:01.009127  303437 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:06:01.009191  303437 notify.go:221] Checking for updates...
	I1210 07:06:01.015115  303437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:06:01.018047  303437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:01.021396  303437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:06:01.024347  303437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:06:01.027298  303437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:06:01.030670  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:01.031359  303437 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:06:01.059280  303437 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:06:01.059409  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.117784  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.1083965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.117913  303437 docker.go:319] overlay module found
	I1210 07:06:01.121244  303437 out.go:179] * Using the docker driver based on existing profile
	I1210 07:06:01.124129  303437 start.go:309] selected driver: docker
	I1210 07:06:01.124152  303437 start.go:927] validating driver "docker" against &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.124257  303437 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:06:01.124971  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.177684  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.168448125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.178039  303437 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:06:01.178072  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:01.178124  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:01.178165  303437 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.183109  303437 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 07:06:01.185906  303437 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:06:01.188882  303437 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:06:01.191653  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:01.191725  303437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:06:01.211624  303437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:06:01.211647  303437 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:06:01.245655  303437 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 07:06:01.410333  303437 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
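The two 404 warnings above are minikube probing its preload mirrors for a v1.35.0-rc.1 tarball before falling back to caching images one by one. The same probe can be reproduced with the first URL exactly as logged; the 404 is expected until a preload is published for this release candidate:

    curl -sI -o /dev/null -w '%{http_code}\n' \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
    # prints 404: no preload exists for this version/runtime/arch combination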
	I1210 07:06:01.410482  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.410710  303437 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:06:01.410741  303437 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:01.410794  303437 start.go:364] duration metric: took 32.001µs to acquireMachinesLock for "newest-cni-168808"
	I1210 07:06:01.410811  303437 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:06:01.410817  303437 fix.go:54] fixHost starting: 
	I1210 07:06:01.411108  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.411381  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
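This "Not caching binary" line recurs throughout the restart: instead of downloading kubeadm into the local cache, minikube references the upstream release URL with a checksum=file: suffix, so any eventual download is verified against the published .sha256 file. That checksum can be fetched directly (URL taken from the log):

    curl -fsSL https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256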
	I1210 07:06:01.445269  303437 fix.go:112] recreateIfNeeded on newest-cni-168808: state=Stopped err=<nil>
	W1210 07:06:01.445299  303437 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 07:05:57.413623  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:59.413665  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:01.448589  303437 out.go:252] * Restarting existing docker container for "newest-cni-168808" ...
	I1210 07:06:01.448678  303437 cli_runner.go:164] Run: docker start newest-cni-168808
	I1210 07:06:01.609744  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.770299  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.790186  303437 kic.go:430] container "newest-cni-168808" state is running.
	I1210 07:06:01.790574  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:01.816467  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.816783  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.816990  303437 machine.go:94] provisionDockerMachine start ...
	I1210 07:06:01.817053  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:01.864829  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:01.865171  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:01.865181  303437 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:06:01.865918  303437 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:06:02.031349  303437 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031449  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:06:02.031458  303437 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.682µs
	I1210 07:06:02.031466  303437 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:06:02.031488  303437 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031520  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:06:02.031525  303437 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 49.765µs
	I1210 07:06:02.031536  303437 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031546  303437 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031572  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:06:02.031577  303437 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32µs
	I1210 07:06:02.031583  303437 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031592  303437 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031616  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:06:02.031621  303437 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 30.351µs
	I1210 07:06:02.031626  303437 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031635  303437 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031658  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:06:02.031663  303437 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 29.047µs
	I1210 07:06:02.031668  303437 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031676  303437 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031702  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:06:02.031711  303437 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.042µs
	I1210 07:06:02.031716  303437 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:06:02.031725  303437 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031752  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:06:02.031757  303437 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 32.509µs
	I1210 07:06:02.031762  303437 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:06:02.031770  303437 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031794  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:06:02.031799  303437 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.973µs
	I1210 07:06:02.031809  303437 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:06:02.031817  303437 cache.go:87] Successfully saved all images to host disk.
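With no preload tarball available, the cache.go lines above fall back to per-image tar files; each lookup takes only tens of microseconds because every file survived from the first start. The cache can be inspected directly on the host (directory path taken from the log):

    ls /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/
    # expected entries: kube-apiserver_v1.35.0-rc.1, kube-proxy_v1.35.0-rc.1, etcd_3.6.6-0, pause_3.10.1, coredns/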
	I1210 07:06:05.019038  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.019065  303437 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 07:06:05.019142  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.038167  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.038497  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.038514  303437 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 07:06:05.212495  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.212574  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.236676  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.236997  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.237020  303437 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:06:05.387591  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
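The hostname script above is idempotent: it touches /etc/hosts only when no entry for newest-cni-168808 exists, preferring to rewrite an existing 127.0.1.1 line over appending a new one. Assuming the container is still running, the result can be checked from the host with:

    docker exec newest-cni-168808 grep 127.0.1.1 /etc/hosts
    # expect: 127.0.1.1 newest-cni-168808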
	I1210 07:06:05.387661  303437 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:06:05.387701  303437 ubuntu.go:190] setting up certificates
	I1210 07:06:05.387718  303437 provision.go:84] configureAuth start
	I1210 07:06:05.387781  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.406720  303437 provision.go:143] copyHostCerts
	I1210 07:06:05.406812  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:06:05.406827  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:06:05.406903  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:06:05.407068  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:06:05.407080  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:06:05.407115  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:06:05.409257  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:06:05.409288  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:06:05.409367  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:06:05.409470  303437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
	I1210 07:06:05.457283  303437 provision.go:177] copyRemoteCerts
	I1210 07:06:05.457369  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:06:05.457416  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.474754  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.578879  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:06:05.596686  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:06:05.614316  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:06:05.632529  303437 provision.go:87] duration metric: took 244.787433ms to configureAuth
	I1210 07:06:05.632557  303437 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:06:05.632770  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:05.632780  303437 machine.go:97] duration metric: took 3.815782677s to provisionDockerMachine
	I1210 07:06:05.632794  303437 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 07:06:05.632814  303437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:06:05.632866  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:06:05.632909  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.651511  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.755084  303437 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:06:05.758541  303437 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:06:05.758569  303437 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:06:05.758581  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:06:05.758636  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:06:05.758716  303437 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:06:05.758818  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:06:05.766638  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:05.784153  303437 start.go:296] duration metric: took 151.337167ms for postStartSetup
	I1210 07:06:05.784245  303437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:06:05.784296  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.801680  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.903956  303437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:06:05.910414  303437 fix.go:56] duration metric: took 4.499590898s for fixHost
	I1210 07:06:05.910487  303437 start.go:83] releasing machines lock for "newest-cni-168808", held for 4.499684126s
	I1210 07:06:05.910597  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.931294  303437 ssh_runner.go:195] Run: cat /version.json
	I1210 07:06:05.931352  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.933029  303437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:06:05.933104  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.966773  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.968660  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	W1210 07:06:01.914114  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:04.412714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:06.413234  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:06.164421  303437 ssh_runner.go:195] Run: systemctl --version
	I1210 07:06:06.170684  303437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:06:06.174920  303437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:06:06.174984  303437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:06:06.182557  303437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:06:06.182578  303437 start.go:496] detecting cgroup driver to use...
	I1210 07:06:06.182611  303437 detect.go:187] detected "cgroupfs" cgroup driver on host os
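The "cgroupfs" detection drives the containerd configuration a few lines below (it also matches the CgroupDriver field in the docker info dump earlier). On a host like this it can be cross-checked by looking at how /sys/fs/cgroup is mounted; a minimal sketch:

    stat -fc %T /sys/fs/cgroup/
    # 'cgroup2fs' indicates the unified v2 hierarchy; 'tmpfs' indicates legacy v1, which pairs with the cgroupfs driver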
	I1210 07:06:06.182660  303437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:06:06.200334  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:06:06.213740  303437 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:06:06.213811  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:06:06.229308  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:06:06.242262  303437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:06:06.362603  303437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:06:06.483045  303437 docker.go:234] disabling docker service ...
	I1210 07:06:06.483112  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:06:06.498250  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:06:06.511747  303437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:06:06.628460  303437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:06:06.766872  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:06:06.779978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:06:06.794352  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
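Writing /etc/crictl.yaml above pins crictl to the containerd socket so that later crictl invocations need no endpoint flags. The equivalent explicit call, useful as a sanity check, would be:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version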
	I1210 07:06:06.943808  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:06:06.954116  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:06:06.962677  303437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:06:06.962740  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:06:06.971255  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:06.980030  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:06:06.988476  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:07.007850  303437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:06:07.016475  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:06:07.025456  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:06:07.034855  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:06:07.044266  303437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:06:07.052503  303437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:06:07.060278  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:07.175410  303437 ssh_runner.go:195] Run: sudo systemctl restart containerd
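The sed edits above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, force the runc.v2 shim, point conf_dir at /etc/cni/net.d, re-enable unprivileged ports, and set SystemdCgroup = false to match the detected cgroupfs driver; the daemon-reload plus restart then applies them. A quick check that the cgroup setting took effect:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # expect: SystemdCgroup = false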
	I1210 07:06:07.276715  303437 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:06:07.276786  303437 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:06:07.280624  303437 start.go:564] Will wait 60s for crictl version
	I1210 07:06:07.280687  303437 ssh_runner.go:195] Run: which crictl
	I1210 07:06:07.284270  303437 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:06:07.312279  303437 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:06:07.312345  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.332603  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.358017  303437 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:06:07.360940  303437 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:06:07.377362  303437 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:06:07.381128  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
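This one-liner updates /etc/hosts atomically and idempotently: it filters out any stale host.minikube.internal line, appends the current gateway mapping, writes the result to a temp file, and copies it back with sudo (a plain > redirect into /etc/hosts would not run as root). The same pattern, generalized with hypothetical NAME and IP variables:

    NAME=host.minikube.internal IP=192.168.76.1   # values from the log
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts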
	I1210 07:06:07.393654  303437 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:06:07.396326  303437 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:06:07.396576  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.559787  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.709730  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.859001  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:07.859128  303437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:06:07.883821  303437 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:06:07.883846  303437 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:06:07.883855  303437 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:06:07.883958  303437 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:06:07.884031  303437 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:06:07.913929  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:07.913952  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:07.913973  303437 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:06:07.913999  303437 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:06:07.914120  303437 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:06:07.914189  303437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:06:07.921856  303437 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:06:07.921924  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:06:07.929166  303437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:06:07.941324  303437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:06:07.954047  303437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
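The three scp lines land the kubelet drop-in, the systemd unit, and the freshly rendered kubeadm config (the 2233-byte YAML printed above) on the node. Recent kubeadm releases can lint such a file before it is ever used; a sketch, assuming `kubeadm config validate` is available in the v1.35.0-rc.1 binary:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new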
	I1210 07:06:07.966208  303437 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:06:07.969747  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.979238  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.094271  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:08.111901  303437 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 07:06:08.111935  303437 certs.go:195] generating shared ca certs ...
	I1210 07:06:08.111952  303437 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.112156  303437 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:06:08.112239  303437 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:06:08.112261  303437 certs.go:257] generating profile certs ...
	I1210 07:06:08.112411  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 07:06:08.112508  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 07:06:08.112594  303437 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 07:06:08.112776  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:06:08.112825  303437 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:06:08.112863  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:06:08.112899  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:06:08.112950  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:06:08.112979  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:06:08.113053  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:08.113737  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:06:08.131868  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:06:08.149347  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:06:08.173211  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:06:08.201112  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:06:08.217931  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:06:08.234927  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:06:08.255525  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:06:08.274117  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:06:08.291924  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:06:08.309223  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:06:08.326082  303437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:06:08.338602  303437 ssh_runner.go:195] Run: openssl version
	I1210 07:06:08.345277  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.353152  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:06:08.360717  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364534  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364612  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.406623  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:06:08.414672  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.422361  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:06:08.430022  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433878  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433973  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.475572  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:06:08.483285  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.491000  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:06:08.498512  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502241  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502306  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.543558  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
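The test/ln/openssl sequence above follows OpenSSL's standard subject-hash lookup scheme: each CA certificate is symlinked from /usr/share/ca-certificates into /etc/ssl/certs, and the presence of a second symlink named <subject-hash>.0 is then verified so TLS clients can locate the cert by hash. One pair, reconstructed from the log's own values (b5213941 is the link tested a few lines up):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0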
	I1210 07:06:08.551469  303437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:06:08.555461  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:06:08.597134  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:06:08.638002  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:06:08.678965  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:06:08.720427  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:06:08.763492  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
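The six -checkend 86400 runs confirm that no control-plane certificate expires within the next 24 hours (86,400 seconds): openssl exits 0 when the certificate is still valid at that horizon and 1 otherwise, which is what lets the restart skip regeneration. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'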
	I1210 07:06:08.809518  303437 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
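
Two fields in this dump drive everything that follows: Addons:map[dashboard:true] is why the dashboard manifests are installed below, and VerifyComponents (apiserver, default_sa and system_pods true; kubelet, node_ready and apps_running false) is why a newest-cni start waits only for the apiserver rather than node readiness, since no CNI is deployed yet. A heavily trimmed, hypothetical mirror of just those fields:

    package main

    import "fmt"

    // clusterConfig is a hypothetical, trimmed stand-in for the struct whose
    // %+v dump appears above; only the fields discussed here are kept.
    type clusterConfig struct {
    	Name              string
    	KubernetesVersion string
    	ContainerRuntime  string
    	Addons            map[string]bool
    	VerifyComponents  map[string]bool
    }

    func main() {
    	cfg := clusterConfig{
    		Name:              "newest-cni-168808",
    		KubernetesVersion: "v1.35.0-rc.1",
    		ContainerRuntime:  "containerd",
    		Addons:            map[string]bool{"dashboard": true},
    		// With no CNI installed, node_ready/kubelet are deliberately false,
    		// so the start only blocks on apiserver, system pods and default SA.
    		VerifyComponents: map[string]bool{
    			"apiserver": true, "system_pods": true, "default_sa": true,
    			"kubelet": false, "node_ready": false, "apps_running": false,
    		},
    	}
    	fmt.Printf("%+v\n", cfg)
    }
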
	I1210 07:06:08.809633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:06:08.809696  303437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:06:08.836487  303437 cri.go:89] found id: ""
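
The empty found id: result means containerd reports no kube-system containers at this point; it is the check on the next line for existing kubeadm config files that selects the cluster-restart path. The listing itself is a crictl query filtered by pod namespace; the same call, sketched with os/exec:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// --quiet prints one container ID per line; the label restricts the
    	// listing to kube-system pods, matching the cri.go call in the log.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
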
	I1210 07:06:08.836609  303437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:06:08.844505  303437 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:06:08.844525  303437 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:06:08.844604  303437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:06:08.852026  303437 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:06:08.852667  303437 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.852944  303437 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-168808" cluster setting kubeconfig missing "newest-cni-168808" context setting]
	I1210 07:06:08.853395  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
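
The verify/repair pair above detects that the shared kubeconfig lacks both the cluster and the context stanza for newest-cni-168808 and rewrites the file under a lock. A minimal client-go sketch of that update, with names and the endpoint taken from the log, lock handling omitted, and the ca.crt path an assumption:

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/22094-2307/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		cfg = api.NewConfig() // file may be missing or unreadable
    	}
    	// Add the cluster entry the verifier found missing.
    	cluster := api.NewCluster()
    	cluster.Server = "https://192.168.76.2:8443"
    	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt"
    	cfg.Clusters["newest-cni-168808"] = cluster
    	// ...and the matching context.
    	ctx := api.NewContext()
    	ctx.Cluster = "newest-cni-168808"
    	ctx.AuthInfo = "newest-cni-168808"
    	cfg.Contexts["newest-cni-168808"] = ctx
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		panic(err)
    	}
    }
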
	I1210 07:06:08.854743  303437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:06:08.863687  303437 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:06:08.863719  303437 kubeadm.go:602] duration metric: took 19.187765ms to restartPrimaryControlPlane
	I1210 07:06:08.863729  303437 kubeadm.go:403] duration metric: took 54.219605ms to StartCluster
	I1210 07:06:08.863764  303437 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.863854  303437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.864943  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.865201  303437 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:06:08.865553  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:08.865626  303437 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:06:08.865710  303437 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-168808"
	I1210 07:06:08.865725  303437 addons.go:70] Setting dashboard=true in profile "newest-cni-168808"
	I1210 07:06:08.865738  303437 addons.go:70] Setting default-storageclass=true in profile "newest-cni-168808"
	I1210 07:06:08.865748  303437 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-168808"
	I1210 07:06:08.865755  303437 addons.go:239] Setting addon dashboard=true in "newest-cni-168808"
	W1210 07:06:08.865763  303437 addons.go:248] addon dashboard should already be in state true
	I1210 07:06:08.865787  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866234  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.865732  303437 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-168808"
	I1210 07:06:08.866264  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866892  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.866245  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.870618  303437 out.go:179] * Verifying Kubernetes components...
	I1210 07:06:08.877218  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.909365  303437 addons.go:239] Setting addon default-storageclass=true in "newest-cni-168808"
	I1210 07:06:08.909422  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.909955  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.935168  303437 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:06:08.938081  303437 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:06:08.938245  303437 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:06:08.941690  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:06:08.941720  303437 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:06:08.941756  303437 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:08.941772  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:06:08.941809  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.941835  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.974920  303437 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:08.974945  303437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:06:08.975007  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:09.018425  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.019111  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.028670  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
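
The docker container inspect -f template a few lines up extracts the host port that Docker mapped to the container's 22/tcp, and each sshutil line then dials 127.0.0.1 on that port (33103 here) with the profile's private key. A hedged sketch of the same dial using golang.org/x/crypto/ssh:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // dialNode connects to a kicbase node the way the log does: localhost
    // plus the host port Docker mapped to 22/tcp, with key authentication.
    func dialNode(port, keyPath string) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only trust model
    	}
    	return ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
    }

    func main() {
    	c, err := dialNode("33103",
    		"/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer c.Close()
    	fmt.Println("connected")
    }
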
	I1210 07:06:09.182128  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:09.189848  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:09.218621  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:06:09.218696  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:06:09.233237  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:09.248580  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:06:09.248655  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:06:09.280152  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:06:09.280225  303437 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:06:09.294171  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:06:09.294239  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:06:09.308986  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:06:09.309057  303437 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:06:09.323118  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:06:09.323195  303437 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:06:09.337212  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:06:09.337284  303437 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:06:09.351939  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:06:09.352006  303437 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:06:09.364684  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.364749  303437 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:06:09.377472  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.912036  303437 api_server.go:52] waiting for apiserver process to appear ...
	W1210 07:06:09.912102  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912165  303437 retry.go:31] will retry after 137.554553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:09.912180  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912239  303437 retry.go:31] will retry after 162.08127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912111  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:09.912371  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912391  303437 retry.go:31] will retry after 156.096194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.049986  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:10.068682  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:10.075250  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:10.139495  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.139526  303437 retry.go:31] will retry after 525.238587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196161  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196246  303437 retry.go:31] will retry after 422.355289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196206  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196316  303437 retry.go:31] will retry after 388.387448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.412254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:10.585608  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:10.619095  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:10.648889  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.648984  303437 retry.go:31] will retry after 452.281973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.665111  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:10.718838  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.718922  303437 retry.go:31] will retry after 323.626302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.751170  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.751201  303437 retry.go:31] will retry after 426.205037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.912296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
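
In parallel with the failing applies, api_server.go polls for the kube-apiserver process with pgrep roughly every half second; the applies can only succeed once the process exists and port 8443 actually accepts connections. A minimal sketch of that readiness wait (a plain TCP probe, not minikube's exact check):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer polls until something accepts TCP connections on addr,
    // the condition the refused applies above are implicitly waiting on.
    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("apiserver on %s not ready after %v", addr, timeout)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8443", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
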
	W1210 07:06:08.413486  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:10.912684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:11.043189  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:11.101706  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.108011  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.108097  303437 retry.go:31] will retry after 465.500211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:11.171627  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.171733  303437 retry.go:31] will retry after 644.635053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.177835  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:11.248736  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.248773  303437 retry.go:31] will retry after 646.277835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.413044  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:11.574386  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:11.635719  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.635755  303437 retry.go:31] will retry after 992.827501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.816838  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.874310  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.874341  303437 retry.go:31] will retry after 847.092889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.895446  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:11.912890  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:11.979233  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.979274  303437 retry.go:31] will retry after 1.723803171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.412929  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:12.629708  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:12.711328  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.711402  303437 retry.go:31] will retry after 1.682909305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.721580  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:12.787715  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.787755  303437 retry.go:31] will retry after 1.523563907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.912980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.412270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.704137  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:13.769291  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.769319  303437 retry.go:31] will retry after 2.655752177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.912604  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:14.312036  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:14.379977  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.380010  303437 retry.go:31] will retry after 2.120509482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.395420  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:14.412979  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:14.494970  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.495005  303437 retry.go:31] will retry after 2.083776468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.913027  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.412429  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.912376  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:12.913304  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:15.412868  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
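Interleaved with the addon retries, a second process (pid 296020) is polling the no-preload-320236 node's Ready condition and hitting the same refused connection. A minimal client-go sketch of that kind of check; the kubeconfig path and node name are taken from the log, while the polling cadence is an assumption based on the ~2.5s gaps between these lines:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-320236", metav1.GetOptions{})
		if err != nil {
			// Matches the log: "connection refused" surfaces here and we retry.
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2500 * time.Millisecond) // assumed cadence
	}
}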
	I1210 07:06:16.412255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:16.425325  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:16.500296  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.500325  303437 retry.go:31] will retry after 1.753545178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.501400  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:16.562473  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.562506  303437 retry.go:31] will retry after 5.63085781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.579894  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:16.640721  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.640756  303437 retry.go:31] will retry after 2.710169887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.912245  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.412350  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.913142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
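The steady ~500ms drumbeat of sudo pgrep -xnf kube-apiserver.*minikube.* lines is minikube waiting for the apiserver process itself to appear: -f matches against the full command line, -x requires an exact match of that pattern, and -n keeps only the newest hit. A local-process sketch of the same liveness loop; the real check runs over SSH inside the node, and the interval here simply mirrors the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverProcessUp returns true when pgrep finds a kube-apiserver
// process whose full command line mentions "minikube". pgrep exits
// non-zero when nothing matches, which exec reports as an error.
func apiserverProcessUp() bool {
	err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for range tick.C {
		if apiserverProcessUp() {
			fmt.Println("kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver not running yet, polling again")
	}
}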
	I1210 07:06:18.254741  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:18.317147  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.317176  303437 retry.go:31] will retry after 6.057763532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.912752  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:19.352062  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:19.412870  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:19.413382  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.413410  303437 retry.go:31] will retry after 6.763226999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.913016  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.412997  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.913098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:17.413684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:19.913294  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:21.412278  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:21.913122  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.194391  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:22.251091  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.251123  303437 retry.go:31] will retry after 9.11395006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.412163  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.912351  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.412284  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.913156  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:24.375236  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:24.412827  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:24.440293  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.440322  303437 retry.go:31] will retry after 9.4401753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.912889  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.412233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.912307  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:21.913508  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:23.913605  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:26.413204  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:26.177306  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:26.250932  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.250965  303437 retry.go:31] will retry after 5.997165797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.412268  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:26.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.412900  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.912402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.412186  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.912521  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.412227  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.912255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.413237  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.912254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:28.413461  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:30.913644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:31.366162  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:31.412559  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:31.439835  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.439865  303437 retry.go:31] will retry after 9.181638872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.912411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.248486  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:32.313416  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.313450  303437 retry.go:31] will retry after 9.93876945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.412880  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.912746  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.412590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.880694  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:33.912312  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.964338  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:33.964372  303437 retry.go:31] will retry after 6.698338092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:34.413098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:34.912991  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.413188  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.912404  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.413489  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:35.913510  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:38.413592  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:40.413124  296020 node_ready.go:38] duration metric: took 6m0.00088218s for node "no-preload-320236" to be "Ready" ...
	I1210 07:06:40.416430  296020 out.go:203] 
	W1210 07:06:40.419386  296020 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:06:40.419405  296020 out.go:285] * 
	W1210 07:06:40.421537  296020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:06:40.424792  296020 out.go:203] 
	I1210 07:06:36.412320  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:36.912280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.412192  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.912490  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.412402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.912902  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.412781  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.912868  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.413057  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.621960  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:40.663144  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:40.779058  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.779095  303437 retry.go:31] will retry after 16.870406936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:40.830377  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.830410  303437 retry.go:31] will retry after 13.844749205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.912652  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777113414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777127404Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777160594Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777174535Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777184742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777195950Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777205197Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777215487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777231528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777260304Z" level=info msg="Connect containerd service"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777515527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.778069290Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789502105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789748787Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789677541Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.795087082Z" level=info msg="Start recovering state"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.809745847Z" level=info msg="Start event monitor"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.809929530Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810001120Z" level=info msg="Start streaming server"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810060181Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810114328Z" level=info msg="runtime interface starting up..."
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810165307Z" level=info msg="starting plugins..."
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810240475Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:00:37 no-preload-320236 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.811841962Z" level=info msg="containerd successfully booted in 0.055335s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:06:41.670831    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:06:41.671559    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:06:41.673295    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:06:41.673658    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:06:41.675426    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 07:06:41 up  1:49,  0 user,  load average: 0.36, 0.65, 1.35
	Linux no-preload-320236 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:06:38 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:06:38 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 10 07:06:38 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:38 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:38 no-preload-320236 kubelet[3813]: E1210 07:06:38.960142    3813 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:06:38 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:06:38 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:06:39 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 10 07:06:39 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:39 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:39 no-preload-320236 kubelet[3819]: E1210 07:06:39.705629    3819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:06:39 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:06:39 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:06:40 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 10 07:06:40 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:40 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:40 no-preload-320236 kubelet[3825]: E1210 07:06:40.509996    3825 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:06:40 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:06:40 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:06:41 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 10 07:06:41 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:41 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:06:41 no-preload-320236 kubelet[3846]: E1210 07:06:41.225368    3846 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:06:41 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:06:41 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 2 (368.63995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.62s)
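The kubelet journal above points at the likely root cause of this failure: every kubelet start exits with "kubelet is configured to not run on a host using cgroup v1", so kube-apiserver is never created, every dial to localhost:8443 or 192.168.85.2:8443 is refused, and the node never reaches Ready before the 6m0s deadline. A minimal spot-check of the cgroup mode, assuming the docker driver and this run's node container name (illustrative commands, not part of the test run):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the cgroup v1 layout this kubelet rejects
	docker exec no-preload-320236 stat -fc %T /sys/fs/cgroup/
	# same check on the Jenkins host, whose kernel the kic container shares
	stat -fc %T /sys/fs/cgroup/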

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (96.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 07:04:47.372903    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.279052869s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-168808
helpers_test.go:244: (dbg) docker inspect newest-cni-168808:

-- stdout --
	[
	    {
	        "Id": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	        "Created": "2025-12-10T06:55:56.205654512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:55:56.278762999Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hosts",
	        "LogPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3-json.log",
	        "Name": "/newest-cni-168808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-168808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-168808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	                "LowerDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-168808",
	                "Source": "/var/lib/docker/volumes/newest-cni-168808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-168808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-168808",
	                "name.minikube.sigs.k8s.io": "newest-cni-168808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8dc8a0bd8d67f970fd6ee9f5185b3999f597162904a68c34b61526eb2bb5352e",
	            "SandboxKey": "/var/run/docker/netns/8dc8a0bd8d67",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-168808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:0a:53:b3:10:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fedd4ad26097ebf6757101ef8e22a141acd4ba740aa95d5f1eab7ffc232007f5",
	                    "EndpointID": "32d7243a0bf1738641a18a9cb935e90041c7084e02ec3035ddaf5ac35cf4ef4b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-168808",
	                        "7d1db3aa80a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808: exit status 6 (350.325381ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:05:58.211217  302911 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
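The stderr above shows the real problem is a missing kubeconfig entry for the profile rather than a stopped host, which matches the "stale minikube-vm" warning in stdout. The repair the warning itself suggests, sketched with this run's binary and profile (for reference only, not executed by the test):

	out/minikube-linux-arm64 update-context -p newest-cni-168808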
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-451123 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-451123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:52 UTC │
	│ start   │ -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:52 UTC │ 10 Dec 25 06:53 UTC │
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │                     │
	│ stop    │ -p no-preload-320236 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ addons  │ enable dashboard -p no-preload-320236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
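	# Aside: a sketch of pulling just the audit table above on its own; the
	# --audit flag is an assumption of this note, not taken from the run:
	out/minikube-linux-arm64 logs --audit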
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:00:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:00:31.606607  296020 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:00:31.606726  296020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:00:31.606763  296020 out.go:374] Setting ErrFile to fd 2...
	I1210 07:00:31.606781  296020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:00:31.607068  296020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:00:31.607446  296020 out.go:368] Setting JSON to false
	I1210 07:00:31.608351  296020 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6182,"bootTime":1765343850,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:00:31.608452  296020 start.go:143] virtualization:  
	I1210 07:00:31.611858  296020 out.go:179] * [no-preload-320236] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:00:31.616135  296020 notify.go:221] Checking for updates...
	I1210 07:00:31.616625  296020 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:00:31.619795  296020 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:00:31.622704  296020 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:31.625649  296020 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:00:31.628623  296020 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:00:31.632108  296020 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:00:31.635513  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:31.636082  296020 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:00:31.668430  296020 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:00:31.668544  296020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:00:31.757341  296020 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:00:31.748329892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:00:31.757451  296020 docker.go:319] overlay module found
	I1210 07:00:31.760519  296020 out.go:179] * Using the docker driver based on existing profile
	I1210 07:00:31.763315  296020 start.go:309] selected driver: docker
	I1210 07:00:31.763332  296020 start.go:927] validating driver "docker" against &{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:31.763427  296020 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:00:31.764155  296020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:00:31.816369  296020 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:00:31.807572299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:00:31.816697  296020 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:00:31.816729  296020 cni.go:84] Creating CNI manager for ""
	I1210 07:00:31.816780  296020 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:00:31.816827  296020 start.go:353] cluster config:
	{Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:31.820155  296020 out.go:179] * Starting "no-preload-320236" primary control-plane node in "no-preload-320236" cluster
	I1210 07:00:31.823065  296020 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:00:31.825850  296020 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:00:31.828615  296020 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:00:31.828709  296020 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:00:31.828754  296020 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 07:00:31.829080  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:31.848090  296020 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:00:31.848110  296020 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:00:31.848126  296020 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:00:31.848157  296020 start.go:360] acquireMachinesLock for no-preload-320236: {Name:mk4a67a43519a7e8fad4432e15b5aa1fee295390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:31.848210  296020 start.go:364] duration metric: took 35.34µs to acquireMachinesLock for "no-preload-320236"
	I1210 07:00:31.848227  296020 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:00:31.848233  296020 fix.go:54] fixHost starting: 
	I1210 07:00:31.848495  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:31.871386  296020 fix.go:112] recreateIfNeeded on no-preload-320236: state=Stopped err=<nil>
	W1210 07:00:31.871423  296020 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:00:31.874767  296020 out.go:252] * Restarting existing docker container for "no-preload-320236" ...
	I1210 07:00:31.874868  296020 cli_runner.go:164] Run: docker start no-preload-320236
	I1210 07:00:32.009251  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:32.156909  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:32.181453  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:32.182795  296020 kic.go:430] container "no-preload-320236" state is running.
	I1210 07:00:32.183209  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:32.232417  296020 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/config.json ...
	I1210 07:00:32.232635  296020 machine.go:94] provisionDockerMachine start ...
	I1210 07:00:32.232693  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:32.261256  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:32.261589  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:32.261598  296020 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:00:32.262750  296020 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:00:32.410295  296020 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410397  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:00:32.410406  296020 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.804µs
	I1210 07:00:32.410415  296020 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:00:32.410426  296020 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410466  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:00:32.410472  296020 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 47.402µs
	I1210 07:00:32.410478  296020 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410488  296020 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410538  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:00:32.410543  296020 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 57.051µs
	I1210 07:00:32.410550  296020 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410561  296020 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410587  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:00:32.410592  296020 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 32.222µs
	I1210 07:00:32.410597  296020 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410607  296020 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410641  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:00:32.410646  296020 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 40.46µs
	I1210 07:00:32.410652  296020 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:00:32.410666  296020 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410699  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:00:32.410704  296020 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.333µs
	I1210 07:00:32.410709  296020 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:00:32.410718  296020 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410744  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:00:32.410748  296020 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 31.541µs
	I1210 07:00:32.410754  296020 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:00:32.410763  296020 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:00:32.410800  296020 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:00:32.410805  296020 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 43.233µs
	I1210 07:00:32.410810  296020 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:00:32.410817  296020 cache.go:87] Successfully saved all images to host disk.
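	# Aside: a minimal way to inspect the on-disk image cache the cache.go
	# lines above refer to (cache root copied from the log):
	ls -R /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64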
	I1210 07:00:35.415945  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 07:00:35.415969  296020 ubuntu.go:182] provisioning hostname "no-preload-320236"
	I1210 07:00:35.416031  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:35.439002  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:35.439495  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:35.439512  296020 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-320236 && echo "no-preload-320236" | sudo tee /etc/hostname
	I1210 07:00:35.600226  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-320236
	
	I1210 07:00:35.600320  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:35.617143  296020 main.go:143] libmachine: Using SSH client type: native
	I1210 07:00:35.617452  296020 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1210 07:00:35.617472  296020 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:00:35.771609  296020 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:00:35.771638  296020 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:00:35.771682  296020 ubuntu.go:190] setting up certificates
	I1210 07:00:35.771771  296020 provision.go:84] configureAuth start
	I1210 07:00:35.771846  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:35.791167  296020 provision.go:143] copyHostCerts
	I1210 07:00:35.791247  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:00:35.791260  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:00:35.791339  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:00:35.791446  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:00:35.791457  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:00:35.791485  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:00:35.791558  296020 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:00:35.791566  296020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:00:35.791595  296020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:00:35.791661  296020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.no-preload-320236 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-320236]
	I1210 07:00:36.056131  296020 provision.go:177] copyRemoteCerts
	I1210 07:00:36.056213  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:00:36.056259  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.074420  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.179259  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:00:36.197688  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:00:36.220673  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:00:36.237968  296020 provision.go:87] duration metric: took 466.169895ms to configureAuth
	I1210 07:00:36.237995  296020 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:00:36.238191  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:36.238203  296020 machine.go:97] duration metric: took 4.005560458s to provisionDockerMachine
	I1210 07:00:36.238212  296020 start.go:293] postStartSetup for "no-preload-320236" (driver="docker")
	I1210 07:00:36.238223  296020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:00:36.238275  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:00:36.238329  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.254857  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.358982  296020 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:00:36.362431  296020 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:00:36.362463  296020 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:00:36.362476  296020 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:00:36.362532  296020 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:00:36.362616  296020 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:00:36.362730  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:00:36.370123  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:00:36.387715  296020 start.go:296] duration metric: took 149.487982ms for postStartSetup
	I1210 07:00:36.387809  296020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:00:36.387850  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.404695  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.508174  296020 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:00:36.512870  296020 fix.go:56] duration metric: took 4.664630876s for fixHost
	I1210 07:00:36.512896  296020 start.go:83] releasing machines lock for "no-preload-320236", held for 4.664678434s
	I1210 07:00:36.512987  296020 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-320236
	I1210 07:00:36.529627  296020 ssh_runner.go:195] Run: cat /version.json
	I1210 07:00:36.529680  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.529956  296020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:00:36.530021  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:36.556696  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.560591  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:36.658689  296020 ssh_runner.go:195] Run: systemctl --version
	I1210 07:00:36.753674  296020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:00:36.758001  296020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:00:36.758069  296020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:00:36.765538  296020 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:00:36.765576  296020 start.go:496] detecting cgroup driver to use...
	I1210 07:00:36.765607  296020 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:00:36.765653  296020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:00:36.782605  296020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:00:36.796109  296020 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:00:36.796200  296020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:00:36.811318  296020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:00:36.824166  296020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:00:36.940162  296020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:00:37.067248  296020 docker.go:234] disabling docker service ...
	I1210 07:00:37.067375  296020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:00:37.082860  296020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:00:37.097077  296020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:00:37.210251  296020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:00:37.318500  296020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:00:37.331193  296020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:00:37.346030  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:37.491512  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:00:37.500237  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:00:37.508872  296020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:00:37.508946  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:00:37.517510  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:00:37.526466  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:00:37.534915  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:00:37.543652  296020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:00:37.551699  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:00:37.560511  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:00:37.569071  296020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:00:37.577739  296020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:00:37.585320  296020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:00:37.592659  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:37.721273  296020 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:00:37.812117  296020 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:00:37.812183  296020 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:00:37.815932  296020 start.go:564] Will wait 60s for crictl version
	I1210 07:00:37.815991  296020 ssh_runner.go:195] Run: which crictl
	I1210 07:00:37.819381  296020 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:00:37.842923  296020 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:00:37.842993  296020 ssh_runner.go:195] Run: containerd --version
	I1210 07:00:37.862565  296020 ssh_runner.go:195] Run: containerd --version
	I1210 07:00:37.887310  296020 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:00:37.890224  296020 cli_runner.go:164] Run: docker network inspect no-preload-320236 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:00:37.905602  296020 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 07:00:37.909066  296020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:00:37.918252  296020 kubeadm.go:884] updating cluster {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:00:37.918438  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.069274  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:00:38.216468  296020 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
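	# Aside: a sketch of the fetch the repeated "Not caching binary" lines
	# describe, downloading kubeadm and checking it against the published
	# digest (URLs copied from the log; the explicit sha256sum pipeline is an
	# assumption, minikube performs the equivalent check internally):
	curl -fsSLO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256)  kubeadm" | sha256sum -c -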
	I1210 07:00:38.360305  296020 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:00:38.360402  296020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:00:38.384995  296020 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:00:38.385019  296020 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:00:38.385028  296020 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:00:38.385169  296020 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:00:38.385237  296020 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:00:38.412034  296020 cni.go:84] Creating CNI manager for ""
	I1210 07:00:38.412063  296020 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:00:38.412085  296020 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:00:38.412108  296020 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320236 NodeName:no-preload-320236 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:00:38.412227  296020 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-320236"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
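The three documents above are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new (2235 bytes, per the scp line below). As a minimal Go sketch of reading back just the KubeletConfiguration fields shown here — assuming the document has been saved to its own file (kubelet-config.yaml is a hypothetical path) and that gopkg.in/yaml.v3 is available; the struct models only the fields printed above, not the full upstream type:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models only the KubeletConfiguration fields shown in
// the log above; the real upstream type has many more.
type kubeletConfig struct {
	Kind                        string            `yaml:"kind"`
	CgroupDriver                string            `yaml:"cgroupDriver"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	// Hypothetical path: assumes the KubeletConfiguration document was
	// extracted to its own file (yaml.Unmarshal reads one document).
	data, err := os.ReadFile("kubelet-config.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg kubeletConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// With the values above (thresholds at 0%/100%), disk-pressure
	// eviction is effectively disabled for the test node.
	fmt.Println(cfg.CgroupDriver, cfg.EvictionHard["nodefs.available"])
}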
	I1210 07:00:38.412299  296020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:00:38.421091  296020 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:00:38.421163  296020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:00:38.429922  296020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:00:38.443653  296020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:00:38.457014  296020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1210 07:00:38.471955  296020 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:00:38.475882  296020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
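The one-liner above makes the host entry idempotent: any stale control-plane.minikube.internal line is filtered out before the fresh 192.168.85.2 mapping is appended. The same idea in Go, as an illustrative sketch (minikube itself runs the bash version over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mimics the /etc/hosts rewrite above: drop any stale
// line ending in "<TAB>host", then append the desired "IP<TAB>host".
// Sketch only, not minikube code.
func ensureHostsEntry(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	contents := strings.TrimRight(string(data), "\n")
	fmt.Print(ensureHostsEntry(contents, "192.168.85.2", "control-plane.minikube.internal"))
}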
	I1210 07:00:38.485504  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:38.595895  296020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:00:38.612585  296020 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236 for IP: 192.168.85.2
	I1210 07:00:38.612609  296020 certs.go:195] generating shared ca certs ...
	I1210 07:00:38.612627  296020 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:38.612815  296020 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:00:38.612878  296020 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:00:38.612890  296020 certs.go:257] generating profile certs ...
	I1210 07:00:38.612999  296020 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.key
	I1210 07:00:38.613070  296020 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key.2faa2447
	I1210 07:00:38.613137  296020 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key
	I1210 07:00:38.613277  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:00:38.613326  296020 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:00:38.613338  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:00:38.613368  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:00:38.613404  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:00:38.613433  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:00:38.613490  296020 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:00:38.614212  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:00:38.631972  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:00:38.649467  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:00:38.666377  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:00:38.686373  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:00:38.703781  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:00:38.723153  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:00:38.740812  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:00:38.758333  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:00:38.775839  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:00:38.793284  296020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:00:38.810326  296020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:00:38.822556  296020 ssh_runner.go:195] Run: openssl version
	I1210 07:00:38.829436  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.836724  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:00:38.844002  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.847779  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.847843  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:00:38.893925  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:00:38.901463  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.909031  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:00:38.916756  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.920591  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.920655  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:00:38.962196  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:00:38.969616  296020 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.976917  296020 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:00:38.984547  296020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.988142  296020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:00:38.988227  296020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:00:39.029601  296020 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:00:39.037081  296020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:00:39.040891  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:00:39.082809  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:00:39.123802  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:00:39.170233  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:00:39.211599  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:00:39.252658  296020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
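The six openssl invocations above use -checkend 86400 to ask whether each control-plane certificate expires within the next 24 hours. The equivalent test is straightforward with Go's standard library; a sketch, not what minikube actually runs (it shells out to openssl over SSH):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within d,
// mirroring `openssl x509 -noout -in path -checkend <seconds>` above.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring if "now + d" is past the certificate's NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}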
	I1210 07:00:39.293664  296020 kubeadm.go:401] StartCluster: {Name:no-preload-320236 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-320236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:00:39.293761  296020 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:00:39.293833  296020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:00:39.326465  296020 cri.go:89] found id: ""
	I1210 07:00:39.326535  296020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:00:39.334044  296020 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:00:39.334065  296020 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:00:39.334134  296020 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:00:39.341326  296020 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:00:39.341712  296020 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-320236" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:39.341813  296020 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-320236" cluster setting kubeconfig missing "no-preload-320236" context setting]
	I1210 07:00:39.342066  296020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.343566  296020 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:00:39.351071  296020 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 07:00:39.351101  296020 kubeadm.go:602] duration metric: took 17.030813ms to restartPrimaryControlPlane
	I1210 07:00:39.351110  296020 kubeadm.go:403] duration metric: took 57.455602ms to StartCluster
	I1210 07:00:39.351126  296020 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.351186  296020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:00:39.351790  296020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:00:39.351984  296020 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:00:39.352290  296020 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:00:39.352337  296020 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:00:39.352428  296020 addons.go:70] Setting storage-provisioner=true in profile "no-preload-320236"
	I1210 07:00:39.352444  296020 addons.go:239] Setting addon storage-provisioner=true in "no-preload-320236"
	I1210 07:00:39.352451  296020 addons.go:70] Setting dashboard=true in profile "no-preload-320236"
	I1210 07:00:39.352465  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.352474  296020 addons.go:239] Setting addon dashboard=true in "no-preload-320236"
	W1210 07:00:39.352482  296020 addons.go:248] addon dashboard should already be in state true
	I1210 07:00:39.352506  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.352930  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.353043  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.353336  296020 addons.go:70] Setting default-storageclass=true in profile "no-preload-320236"
	I1210 07:00:39.353358  296020 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320236"
	I1210 07:00:39.353631  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.356443  296020 out.go:179] * Verifying Kubernetes components...
	I1210 07:00:39.359604  296020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:00:39.392662  296020 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:00:39.395653  296020 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:00:39.398571  296020 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:00:39.398592  296020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:00:39.398654  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.398779  296020 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:00:39.401749  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:00:39.401779  296020 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:00:39.401844  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.412459  296020 addons.go:239] Setting addon default-storageclass=true in "no-preload-320236"
	I1210 07:00:39.412502  296020 host.go:66] Checking if "no-preload-320236" exists ...
	I1210 07:00:39.412911  296020 cli_runner.go:164] Run: docker container inspect no-preload-320236 --format={{.State.Status}}
	I1210 07:00:39.451209  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.451232  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.471156  296020 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:39.471176  296020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:00:39.471241  296020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-320236
	I1210 07:00:39.496650  296020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/no-preload-320236/id_rsa Username:docker}
	I1210 07:00:39.601190  296020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:00:39.614141  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:00:39.645005  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:00:39.645028  296020 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:00:39.654222  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:39.665638  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:00:39.665659  296020 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:00:39.712904  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:00:39.712926  296020 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:00:39.726749  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:00:39.726772  296020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:00:39.740856  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:00:39.740877  296020 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:00:39.756673  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:00:39.756740  296020 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:00:39.769276  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:00:39.769343  296020 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:00:39.781575  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:00:39.781598  296020 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:00:39.794119  296020 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:00:39.794141  296020 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:00:39.806448  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:40.411601  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.411697  296020 retry.go:31] will retry after 364.307231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.411787  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.411824  296020 retry.go:31] will retry after 175.448245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
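Each failed apply above is re-queued by retry.go with a short randomized delay (175–365 ms in these entries). A minimal stdlib-only sketch of that pattern, illustrative rather than minikube's actual retry implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry mirrors the "will retry after Xms" pattern seen in the log:
// each attempt waits a randomized, growing delay before re-running fn.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the base delay and add jitter so concurrent retries
		// don't synchronize against the same not-yet-ready apiserver.
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // apiserver not up yet
		}
		return nil
	})
	fmt.Println("result:", err)
}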
	W1210 07:00:40.412081  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.412126  296020 retry.go:31] will retry after 340.80415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.412177  296020 node_ready.go:35] waiting up to 6m0s for node "no-preload-320236" to be "Ready" ...
	I1210 07:00:40.587992  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:40.644838  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.644918  296020 retry.go:31] will retry after 280.859873ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.754069  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:00:40.776546  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:40.828821  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.828916  296020 retry.go:31] will retry after 208.166646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:40.845124  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.845178  296020 retry.go:31] will retry after 309.037844ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
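Every failure in this cascade shares one root cause: kubectl's client-side validation tries to fetch the OpenAPI schema from https://localhost:8443, and the apiserver is not yet accepting connections after the restart. A hypothetical helper that waits for the port before applying manifests (function name, address, and timeout are assumptions, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls addr until it accepts TCP connections, which
// is the condition the failed `kubectl apply` calls above are
// implicitly waiting for via retries.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForAPIServer("127.0.0.1:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is accepting connections")
}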
	I1210 07:00:40.926770  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:40.985165  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:40.985193  296020 retry.go:31] will retry after 576.96991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.037550  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:41.099191  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.099230  296020 retry.go:31] will retry after 760.269809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.154571  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:41.223133  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.223166  296020 retry.go:31] will retry after 384.5048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.563176  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:00:41.607812  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:41.634200  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.634229  296020 retry.go:31] will retry after 958.895789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:41.670372  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.670408  296020 retry.go:31] will retry after 1.242104692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.860733  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:41.944937  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:41.944981  296020 retry.go:31] will retry after 1.203859969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:42.412917  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:42.594314  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:42.653050  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.653087  296020 retry.go:31] will retry after 1.019515228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.912735  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:42.992543  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:42.992575  296020 retry.go:31] will retry after 1.525694084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.149942  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:43.215395  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.215430  296020 retry.go:31] will retry after 1.081952772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.673229  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:43.753817  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:43.753847  296020 retry.go:31] will retry after 2.453351659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.297966  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:44.359469  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.359502  296020 retry.go:31] will retry after 2.437831877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:44.413141  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:44.518419  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:44.578484  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:44.578514  296020 retry.go:31] will retry after 2.525951728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.207448  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:46.269857  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.269893  296020 retry.go:31] will retry after 2.493371842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:46.413377  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
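The node_ready.go lines interleaved with the applies poll the same apiserver, this time over the cluster address (192.168.85.2:8443), and fail for the same reason. For reference, the "Ready" check reduces to reading the node's status.conditions; below is a self-contained sketch using only the standard library. The server address and node name come from the log, while the bearer token is a placeholder, and this is not minikube's node_ready.go implementation.

// nodeready_sketch.go — hedged sketch of the Ready-condition check behind
// the node_ready.go log lines. Token is a placeholder.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(server, name, token string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	req, err := http.NewRequest("GET", server+"/api/v1/nodes/"+name, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	// While the apiserver is down this returns "connection refused",
	// which is exactly what the warnings above report.
	resp, err := client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, fmt.Errorf("node %s has no Ready condition", name)
}

func main() {
	ready, err := nodeReady("https://192.168.85.2:8443", "no-preload-320236", "<token>")
	fmt.Println(ready, err)
}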
	I1210 07:00:46.798249  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:46.865016  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:46.865052  296020 retry.go:31] will retry after 1.595518707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:47.104732  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:47.167159  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:47.167199  296020 retry.go:31] will retry after 2.421365807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.461029  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:48.523416  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.523451  296020 retry.go:31] will retry after 5.045916415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.763783  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:48.826893  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:48.826925  296020 retry.go:31] will retry after 2.901964551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:48.913552  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:49.589035  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:49.649801  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:49.649838  296020 retry.go:31] will retry after 4.385171631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:50.913785  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:51.729508  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:51.789192  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:51.789222  296020 retry.go:31] will retry after 4.971484132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:53.412679  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:53.570118  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:53.628103  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:53.628135  296020 retry.go:31] will retry after 4.154709683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:54.035994  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:54.099925  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:54.099959  296020 retry.go:31] will retry after 5.104591633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:55.413548  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:56.761591  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:00:56.827407  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:56.827438  296020 retry.go:31] will retry after 6.353816854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:57.783555  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:00:57.845429  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:57.845462  296020 retry.go:31] will retry after 8.667848959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:57.912770  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:00:59.205067  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:00:59.264096  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:00:59.264126  296020 retry.go:31] will retry after 10.603627722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:00:59.912812  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:01.913336  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:03.181966  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:03.241570  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:03.241608  296020 retry.go:31] will retry after 19.837023952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:04.412784  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:06.413759  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:06.515688  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:06.581717  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:06.581752  296020 retry.go:31] will retry after 20.713933736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:08.913557  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:09.868219  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:09.930350  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:09.930381  296020 retry.go:31] will retry after 16.670877723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:11.413714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:13.913676  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:16.413698  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:18.913533  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:21.413576  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:23.079136  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:23.142459  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:23.142490  296020 retry.go:31] will retry after 12.673593141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:23.913225  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:25.913289  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:26.601791  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:26.668541  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:26.668575  296020 retry.go:31] will retry after 21.28734842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:27.295978  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:27.360758  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:27.360795  296020 retry.go:31] will retry after 15.710281845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:27.913387  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:29.913460  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:31.913645  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:33.913718  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:35.816320  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:35.874198  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:35.874230  296020 retry.go:31] will retry after 21.376325369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:36.412713  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:38.412808  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:40.913670  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:42.913819  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:43.072120  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:43.135982  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:43.136017  296020 retry.go:31] will retry after 16.570147181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:45.412747  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:47.913625  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:47.956911  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:01:48.019680  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:48.019731  296020 retry.go:31] will retry after 28.501835741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:49.913722  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:52.412735  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:54.913738  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:57.251364  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:01:57.311036  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:01:57.311129  296020 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
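Every apply above fails client-side: kubectl cannot download the OpenAPI schema from https://localhost:8443 because nothing is listening there, so validation aborts before any manifest reaches the cluster. The error text itself names the escape hatch; a minimal sketch of retrying one manifest with client-side validation disabled (binary and paths taken from the log above, and only useful once the API server is reachable again):

    # retry the storageclass addon manifest without client-side schema validation
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storageclass.yaml

Note that --validate=false only skips the schema download; while the connection itself is refused, the apply will still fail at the server.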
	W1210 07:01:57.412814  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:01:59.413910  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:01:59.706314  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:01:59.768631  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:01:59.768667  296020 retry.go:31] will retry after 38.033263553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[node_ready.go:55 warning repeated 7 times between 07:02:01 and 07:02:15, each failing with: Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused (will retry)]
	I1210 07:02:16.522068  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:02:16.598309  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:02:16.598419  296020 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[node_ready.go:55 warning repeated 10 times between 07:02:17 and 07:02:37, each failing with: Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused (will retry)]
	I1210 07:02:37.802376  296020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:02:37.868725  296020 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:02:37.868813  296020 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:02:37.871969  296020 out.go:179] * Enabled addons: 
	I1210 07:02:37.875533  296020 addons.go:530] duration metric: took 1m58.523193068s for enable addons: enabled=[]
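After roughly two minutes the addon loop gives up with enabled=[]: nothing was applied because the API server never answered. Before retrying any addon it is worth probing the endpoints the errors point at; a small sketch, assuming shell access to the node and the profile name seen in this log (the health probes are illustrative checks, not something the harness runs):

    # from inside the node: is anything answering on the apiserver port?
    curl -k https://localhost:8443/healthz
    # from the host: same probe against the node IP seen in the retries
    curl -k https://192.168.85.2:8443/healthz
    # once /healthz returns ok, re-enable the addon through minikube
    minikube -p no-preload-320236 addons enable storage-provisioner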
	[node_ready.go:55 warning repeated 43 times between 07:02:39 and 07:04:15, each failing with: Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused (will retry)]
	I1210 07:04:20.438913  288031 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001234655s
	I1210 07:04:20.438947  288031 kubeadm.go:319] 
	I1210 07:04:20.439199  288031 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:04:20.439384  288031 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:04:20.439577  288031 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:04:20.439588  288031 kubeadm.go:319] 
	I1210 07:04:20.439880  288031 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:04:20.439939  288031 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:04:20.439994  288031 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:04:20.440000  288031 kubeadm.go:319] 
	I1210 07:04:20.444885  288031 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:04:20.445319  288031 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:04:20.445433  288031 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:04:20.445673  288031 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:04:20.445684  288031 kubeadm.go:319] 
	I1210 07:04:20.445752  288031 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
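kubeadm already lists the two commands to run when the kubelet health check times out; collected here as a sketch, assuming a shell on the affected node (e.g. via minikube ssh):

    # service state and recent failures
    systemctl status kubelet
    journalctl -xeu kubelet -n 100
    # the exact endpoint kubeadm polls for up to 4m0s
    curl -sSL http://127.0.0.1:10248/healthz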
	I1210 07:04:20.445817  288031 kubeadm.go:403] duration metric: took 8m6.40123863s to StartCluster
	I1210 07:04:20.445855  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:04:20.445921  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:04:20.470269  288031 cri.go:89] found id: ""
	I1210 07:04:20.470308  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.470316  288031 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:04:20.470323  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:04:20.470390  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:04:20.495234  288031 cri.go:89] found id: ""
	I1210 07:04:20.495265  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.495274  288031 logs.go:284] No container was found matching "etcd"
	I1210 07:04:20.495280  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:04:20.495373  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:04:20.521061  288031 cri.go:89] found id: ""
	I1210 07:04:20.521084  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.521093  288031 logs.go:284] No container was found matching "coredns"
	I1210 07:04:20.521099  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:04:20.521177  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:04:20.545895  288031 cri.go:89] found id: ""
	I1210 07:04:20.545918  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.545927  288031 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:04:20.545934  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:04:20.545990  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:04:20.570266  288031 cri.go:89] found id: ""
	I1210 07:04:20.570288  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.570297  288031 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:04:20.570303  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:04:20.570392  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:04:20.594282  288031 cri.go:89] found id: ""
	I1210 07:04:20.594304  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.594312  288031 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:04:20.594319  288031 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:04:20.594383  288031 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:04:20.618464  288031 cri.go:89] found id: ""
	I1210 07:04:20.618493  288031 logs.go:282] 0 containers: []
	W1210 07:04:20.618501  288031 logs.go:284] No container was found matching "kindnet"
	I1210 07:04:20.618511  288031 logs.go:123] Gathering logs for containerd ...
	I1210 07:04:20.618538  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:04:20.660630  288031 logs.go:123] Gathering logs for container status ...
	I1210 07:04:20.660704  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:04:20.699139  288031 logs.go:123] Gathering logs for kubelet ...
	I1210 07:04:20.699162  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:04:20.761847  288031 logs.go:123] Gathering logs for dmesg ...
	I1210 07:04:20.761880  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:04:20.775451  288031 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:04:20.775481  288031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:04:20.841106  288031 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:04:20.833391    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.834229    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.835767    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.836254    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.837830    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:04:20.833391    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.834229    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.835767    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.836254    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:04:20.837830    5479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
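The describe-nodes attempt fails for the same underlying reason: no API server. The harness has already established this with crictl (every "found id" above is empty); the same check by hand is just the commands the log runs:

    # list control-plane containers in any state; empty output matches the
    # "0 containers" results gathered above
    sudo crictl ps -a --name=kube-apiserver
    sudo crictl ps -a --name=etcd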
	W1210 07:04:20.841129  288031 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001234655s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:04:20.841183  288031 out.go:285] * 
	W1210 07:04:20.841248  288031 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical kubeadm init transcript and kubelet-check failure as quoted above]
	W1210 07:04:20.841261  288031 out.go:285] * 
	W1210 07:04:20.843675  288031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
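A sketch of the log-collection command the box asks for, with the profile left as a placeholder since this failing run never names one in the excerpt:

    # write the full minikube log bundle to logs.txt for the GitHub issue
    minikube logs --file=logs.txt -p <profile>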
	I1210 07:04:20.850638  288031 out.go:203] 
	W1210 07:04:20.853450  288031 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical kubeadm init transcript and kubelet-check failure as quoted above]
	
	W1210 07:04:20.853494  288031 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:04:20.853520  288031 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:04:20.856600  288031 out.go:203] 
	W1210 07:04:17.912660  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:19.913717  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:22.412670  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:24.413374  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:26.413544  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:28.413623  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:30.913676  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:33.413655  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:35.413770  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:37.913476  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:40.413253  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:42.413440  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:44.913537  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:47.413752  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:49.913615  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:52.412854  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:54.413630  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:56.912733  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:04:58.913708  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:01.413338  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:03.413683  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:05.413757  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:07.912695  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:09.913684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:12.413665  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:14.913629  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:17.412756  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:19.413671  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:21.913774  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:24.413622  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:26.413733  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:28.913593  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:30.913649  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:33.413576  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:35.913684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:38.412716  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:40.913647  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:43.413725  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:45.414529  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:47.913666  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:50.412769  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:52.413724  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:54.913650  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 06:56:06 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:06.074293751Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.010217823Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.012578615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.021299576Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:07 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:07.022077659Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.096856637Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.100315793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.108662047Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:08 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:08.109287489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.423910237Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.426532520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.435123683Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:09 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:09.435763278Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.431875098Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.434111934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.441828882Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:10 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:10.442369950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.465834077Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.466813179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.471098820Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.472460982Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.802275357Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.803292990Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.806681852Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 10 06:56:11 newest-cni-168808 containerd[758]: time="2025-12-10T06:56:11.807174320Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:05:58.872450    6538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:05:58.873319    6538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:05:58.875337    6538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:05:58.876023    6538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:05:58.877641    6538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	
	
	==> kernel <==
	 07:05:58 up  1:48,  0 user,  load average: 0.50, 0.71, 1.41
	Linux newest-cni-168808 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:05:55 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:55 newest-cni-168808 kubelet[6419]: E1210 07:05:55.959791    6419 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:05:55 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:05:55 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:05:56 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 448.
	Dec 10 07:05:56 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:56 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:56 newest-cni-168808 kubelet[6424]: E1210 07:05:56.705659    6424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:05:56 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:05:56 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:05:57 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 449.
	Dec 10 07:05:57 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:57 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:57 newest-cni-168808 kubelet[6430]: E1210 07:05:57.455913    6430 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:05:57 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:05:57 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 450.
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:58 newest-cni-168808 kubelet[6454]: E1210 07:05:58.185673    6454 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 451.
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:05:58 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
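The kubelet journal at the end of the capture above shows a tight crash loop (restart counter 448 through 451) with the same validation error every time: kubelet v1.35.0-rc.1 refuses to run on a host that is still on cgroup v1. A quick way to confirm which cgroup hierarchy a node uses (a standard check, not part of this test run) is:

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a legacy
	# cgroup v1 host; run on the node or inside the minikube container.
	stat -fc %T /sys/fs/cgroup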
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808: exit status 6 (364.016013ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:05:59.437039  303133 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-168808" apiserver is not running, skipping kubectl commands (state="Stopped")
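The status output above also warns that kubectl is pointing at a stale context. In an interactive session (rather than this CI run) the advertised fix would be, using this run's binary and profile name:

	# Repoint the kubeconfig entry for this profile at the current endpoint.
	out/minikube-linux-arm64 update-context -p newest-cni-168808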
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (96.88s)
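In the capture above minikube itself suggested a workaround for K8S_KUBELET_NOT_RUNNING, and the kubeadm warning names the deeper knob behind this failure. A hedged sketch of both, not verified against this run (the kubelet config field spelling is taken from the warning text and may differ between releases):

	# 1) The suggestion printed by minikube in the log above:
	out/minikube-linux-arm64 start -p newest-cni-168808 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# 2) Per the [WARNING SystemVerification] text, kubelet v1.35+ on a
	#    cgroup v1 host additionally needs this KubeletConfiguration entry
	#    (sketch only; wiring it through minikube is not shown here):
	#      failCgroupV1: false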

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (375.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1210 07:06:38.876215    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:06:40.090278    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 105 (6m10.377207833s)

                                                
                                                
-- stdout --
	* [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:06:00.999721  303437 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:06:00.999928  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:00.999941  303437 out.go:374] Setting ErrFile to fd 2...
	I1210 07:06:00.999948  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:01.000291  303437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:06:01.000840  303437 out.go:368] Setting JSON to false
	I1210 07:06:01.001958  303437 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6511,"bootTime":1765343850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:06:01.002049  303437 start.go:143] virtualization:  
	I1210 07:06:01.005229  303437 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:06:01.009127  303437 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:06:01.009191  303437 notify.go:221] Checking for updates...
	I1210 07:06:01.015115  303437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:06:01.018047  303437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:01.021396  303437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:06:01.024347  303437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:06:01.027298  303437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:06:01.030670  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:01.031359  303437 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:06:01.059280  303437 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:06:01.059409  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.117784  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.1083965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.117913  303437 docker.go:319] overlay module found
	I1210 07:06:01.121244  303437 out.go:179] * Using the docker driver based on existing profile
	I1210 07:06:01.124129  303437 start.go:309] selected driver: docker
	I1210 07:06:01.124152  303437 start.go:927] validating driver "docker" against &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.124257  303437 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:06:01.124971  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.177684  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.168448125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.178039  303437 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:06:01.178072  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:01.178124  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:01.178165  303437 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.183109  303437 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 07:06:01.185906  303437 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:06:01.188882  303437 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:06:01.191653  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:01.191725  303437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:06:01.211624  303437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:06:01.211647  303437 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:06:01.245655  303437 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 07:06:01.410333  303437 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 07:06:01.410482  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.410710  303437 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:06:01.410741  303437 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:01.410794  303437 start.go:364] duration metric: took 32.001µs to acquireMachinesLock for "newest-cni-168808"
	I1210 07:06:01.410811  303437 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:06:01.410817  303437 fix.go:54] fixHost starting: 
	I1210 07:06:01.411108  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.411381  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.445269  303437 fix.go:112] recreateIfNeeded on newest-cni-168808: state=Stopped err=<nil>
	W1210 07:06:01.445299  303437 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:06:01.448589  303437 out.go:252] * Restarting existing docker container for "newest-cni-168808" ...
	I1210 07:06:01.448678  303437 cli_runner.go:164] Run: docker start newest-cni-168808
	I1210 07:06:01.609744  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.770299  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.790186  303437 kic.go:430] container "newest-cni-168808" state is running.
	I1210 07:06:01.790574  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:01.816467  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.816783  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.816990  303437 machine.go:94] provisionDockerMachine start ...
	I1210 07:06:01.817053  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:01.864829  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:01.865171  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:01.865181  303437 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:06:01.865918  303437 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:06:02.031349  303437 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031449  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:06:02.031458  303437 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.682µs
	I1210 07:06:02.031466  303437 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:06:02.031488  303437 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031520  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:06:02.031525  303437 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 49.765µs
	I1210 07:06:02.031536  303437 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031546  303437 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031572  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:06:02.031577  303437 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32µs
	I1210 07:06:02.031583  303437 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031592  303437 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031616  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:06:02.031621  303437 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 30.351µs
	I1210 07:06:02.031626  303437 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031635  303437 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031658  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:06:02.031663  303437 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 29.047µs
	I1210 07:06:02.031668  303437 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031676  303437 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031702  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:06:02.031711  303437 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.042µs
	I1210 07:06:02.031716  303437 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:06:02.031725  303437 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031752  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:06:02.031757  303437 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 32.509µs
	I1210 07:06:02.031762  303437 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:06:02.031770  303437 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031794  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:06:02.031799  303437 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.973µs
	I1210 07:06:02.031809  303437 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:06:02.031817  303437 cache.go:87] Successfully saved all images to host disk.
	I1210 07:06:05.019038  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.019065  303437 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 07:06:05.019142  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.038167  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.038497  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.038514  303437 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 07:06:05.212495  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.212574  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.236676  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.236997  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.237020  303437 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:06:05.387591  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:06:05.387661  303437 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:06:05.387701  303437 ubuntu.go:190] setting up certificates
	I1210 07:06:05.387718  303437 provision.go:84] configureAuth start
	I1210 07:06:05.387781  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.406720  303437 provision.go:143] copyHostCerts
	I1210 07:06:05.406812  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:06:05.406827  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:06:05.406903  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:06:05.407068  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:06:05.407080  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:06:05.407115  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:06:05.409257  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:06:05.409288  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:06:05.409367  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:06:05.409470  303437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
	I1210 07:06:05.457283  303437 provision.go:177] copyRemoteCerts
	I1210 07:06:05.457369  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:06:05.457416  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.474754  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.578879  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:06:05.596686  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:06:05.614316  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:06:05.632529  303437 provision.go:87] duration metric: took 244.787433ms to configureAuth
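	The server certificate copied above was generated with san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]; a quick way to confirm the SANs actually present in the file (plain openssl, not part of this run):
	openssl x509 -in /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'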
	I1210 07:06:05.632557  303437 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:06:05.632770  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:05.632780  303437 machine.go:97] duration metric: took 3.815782677s to provisionDockerMachine
	I1210 07:06:05.632794  303437 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 07:06:05.632814  303437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:06:05.632866  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:06:05.632909  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.651511  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.755084  303437 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:06:05.758541  303437 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:06:05.758569  303437 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:06:05.758581  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:06:05.758636  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:06:05.758716  303437 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:06:05.758818  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:06:05.766638  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:05.784153  303437 start.go:296] duration metric: took 151.337167ms for postStartSetup
	I1210 07:06:05.784245  303437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:06:05.784296  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.801680  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.903956  303437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:06:05.910414  303437 fix.go:56] duration metric: took 4.499590898s for fixHost
	I1210 07:06:05.910487  303437 start.go:83] releasing machines lock for "newest-cni-168808", held for 4.499684126s
	I1210 07:06:05.910597  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.931294  303437 ssh_runner.go:195] Run: cat /version.json
	I1210 07:06:05.931352  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.933029  303437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:06:05.933104  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.966773  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.968660  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:06.164421  303437 ssh_runner.go:195] Run: systemctl --version
	I1210 07:06:06.170684  303437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:06:06.174920  303437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:06:06.174984  303437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:06:06.182557  303437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:06:06.182578  303437 start.go:496] detecting cgroup driver to use...
	I1210 07:06:06.182611  303437 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:06:06.182660  303437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:06:06.200334  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:06:06.213740  303437 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:06:06.213811  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:06:06.229308  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:06:06.242262  303437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:06:06.362603  303437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:06:06.483045  303437 docker.go:234] disabling docker service ...
	I1210 07:06:06.483112  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:06:06.498250  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:06:06.511747  303437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:06:06.628460  303437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:06:06.766872  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:06:06.779978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:06:06.794352  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:06.943808  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:06:06.954116  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:06:06.962677  303437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:06:06.962740  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:06:06.971255  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:06.980030  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:06:06.988476  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:07.007850  303437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:06:07.016475  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:06:07.025456  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:06:07.034855  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:06:07.044266  303437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:06:07.052503  303437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:06:07.060278  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:07.175410  303437 ssh_runner.go:195] Run: sudo systemctl restart containerd
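	Assuming a stock config.toml with the 1.x-style keys that the sed expressions above target, the edited file would contain a fragment roughly like the following (illustrative reconstruction, not captured from the node):
	version = 2
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false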
	I1210 07:06:07.276715  303437 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:06:07.276786  303437 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:06:07.280624  303437 start.go:564] Will wait 60s for crictl version
	I1210 07:06:07.280687  303437 ssh_runner.go:195] Run: which crictl
	I1210 07:06:07.284270  303437 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:06:07.312279  303437 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:06:07.312345  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.332603  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.358017  303437 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:06:07.360940  303437 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:06:07.377362  303437 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:06:07.381128  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.393654  303437 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:06:07.396326  303437 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:06:07.396576  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.559787  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.709730  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.859001  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:07.859128  303437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:06:07.883821  303437 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:06:07.883846  303437 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:06:07.883855  303437 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:06:07.883958  303437 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:06:07.884031  303437 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:06:07.913929  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:07.913952  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:07.913973  303437 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:06:07.913999  303437 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:06:07.914120  303437 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:06:07.914189  303437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:06:07.921856  303437 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:06:07.921924  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:06:07.929166  303437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:06:07.941324  303437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:06:07.954047  303437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
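	Recent kubeadm releases can sanity-check a generated file like kubeadm.yaml.new before it is used; as an optional verification sketch (not something this run performs):
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new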
	I1210 07:06:07.966208  303437 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:06:07.969747  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.979238  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.094271  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:08.111901  303437 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 07:06:08.111935  303437 certs.go:195] generating shared ca certs ...
	I1210 07:06:08.111952  303437 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.112156  303437 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:06:08.112239  303437 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:06:08.112261  303437 certs.go:257] generating profile certs ...
	I1210 07:06:08.112411  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 07:06:08.112508  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 07:06:08.112594  303437 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 07:06:08.112776  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:06:08.112825  303437 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:06:08.112863  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:06:08.112899  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:06:08.112950  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:06:08.112979  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:06:08.113053  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:08.113737  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:06:08.131868  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:06:08.149347  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:06:08.173211  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:06:08.201112  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:06:08.217931  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:06:08.234927  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:06:08.255525  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:06:08.274117  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:06:08.291924  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:06:08.309223  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:06:08.326082  303437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:06:08.338602  303437 ssh_runner.go:195] Run: openssl version
	I1210 07:06:08.345277  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.353152  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:06:08.360717  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364534  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364612  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.406623  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:06:08.414672  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.422361  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:06:08.430022  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433878  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433973  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.475572  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:06:08.483285  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.491000  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:06:08.498512  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502241  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502306  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.543558  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
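	The .0 names probed above follow OpenSSL's hashed-directory convention: each link in /etc/ssl/certs is named after the certificate's subject hash, with a .0 suffix. A sketch of how such a link is formed (the creation step itself is not shown in this excerpt):
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem)
	sudo ln -fs /usr/share/ca-certificates/41162.pem "/etc/ssl/certs/$H.0"
	# H is 3ec20f2e here, matching the test -L probe above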
	I1210 07:06:08.551469  303437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:06:08.555461  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:06:08.597134  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:06:08.638002  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:06:08.678965  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:06:08.720427  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:06:08.763492  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:06:08.809518  303437 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:08.809633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:06:08.809696  303437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:06:08.836487  303437 cri.go:89] found id: ""
	I1210 07:06:08.836609  303437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:06:08.844505  303437 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:06:08.844525  303437 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:06:08.844604  303437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:06:08.852026  303437 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:06:08.852667  303437 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.852944  303437 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-168808" cluster setting kubeconfig missing "newest-cni-168808" context setting]
	I1210 07:06:08.853395  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.854743  303437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:06:08.863687  303437 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:06:08.863719  303437 kubeadm.go:602] duration metric: took 19.187765ms to restartPrimaryControlPlane
	I1210 07:06:08.863729  303437 kubeadm.go:403] duration metric: took 54.219605ms to StartCluster
	I1210 07:06:08.863764  303437 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.863854  303437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.864943  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.865201  303437 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:06:08.865553  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:08.865626  303437 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:06:08.865710  303437 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-168808"
	I1210 07:06:08.865725  303437 addons.go:70] Setting dashboard=true in profile "newest-cni-168808"
	I1210 07:06:08.865738  303437 addons.go:70] Setting default-storageclass=true in profile "newest-cni-168808"
	I1210 07:06:08.865748  303437 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-168808"
	I1210 07:06:08.865755  303437 addons.go:239] Setting addon dashboard=true in "newest-cni-168808"
	W1210 07:06:08.865763  303437 addons.go:248] addon dashboard should already be in state true
	I1210 07:06:08.865787  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866234  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.865732  303437 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-168808"
	I1210 07:06:08.866264  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866892  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.866245  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.870618  303437 out.go:179] * Verifying Kubernetes components...
	I1210 07:06:08.877218  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.909365  303437 addons.go:239] Setting addon default-storageclass=true in "newest-cni-168808"
	I1210 07:06:08.909422  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.909955  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.935168  303437 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:06:08.938081  303437 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:06:08.938245  303437 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:06:08.941690  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:06:08.941720  303437 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:06:08.941756  303437 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:08.941772  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:06:08.941809  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.941835  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.974920  303437 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:08.974945  303437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:06:08.975007  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:09.018425  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.019111  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.028670  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.182128  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:09.189848  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:09.218621  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:06:09.218696  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:06:09.233237  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:09.248580  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:06:09.248655  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:06:09.280152  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:06:09.280225  303437 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:06:09.294171  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:06:09.294239  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:06:09.308986  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:06:09.309057  303437 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:06:09.323118  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:06:09.323195  303437 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:06:09.337212  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:06:09.337284  303437 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:06:09.351939  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:06:09.352006  303437 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:06:09.364684  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.364749  303437 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:06:09.377472  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.912036  303437 api_server.go:52] waiting for apiserver process to appear ...
	W1210 07:06:09.912102  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912165  303437 retry.go:31] will retry after 137.554553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:09.912180  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912239  303437 retry.go:31] will retry after 162.08127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912111  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:09.912371  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912391  303437 retry.go:31] will retry after 156.096194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.049986  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:10.068682  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:10.075250  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:10.139495  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.139526  303437 retry.go:31] will retry after 525.238587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196161  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196246  303437 retry.go:31] will retry after 422.355289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196206  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196316  303437 retry.go:31] will retry after 388.387448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
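
The `retry.go:31` lines show jittered, roughly exponential delays between attempts (525ms, 422ms, 388ms, ... growing toward multi-second waits further down). A minimal sketch of that shape, assuming a hand-rolled loop around `kubectl apply`; this illustrates the pattern only and is not minikube's actual retry implementation:

```go
// Sketch of retry-with-jittered-backoff around "kubectl apply" (assumption:
// doubling base delay plus random jitter; minikube's real policy may differ).
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	delay := 400 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		out, runErr := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if runErr == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v\n%s", manifest, runErr, out)
		// Jitter so parallel appliers (storageclass, dashboard, provisioner)
		// do not hammer the apiserver in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```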
	I1210 07:06:10.412254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:10.585608  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:10.619095  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:10.648889  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.648984  303437 retry.go:31] will retry after 452.281973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.665111  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:10.718838  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.718922  303437 retry.go:31] will retry after 323.626302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.751170  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.751201  303437 retry.go:31] will retry after 426.205037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.912296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:11.043189  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:11.101706  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.108011  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.108097  303437 retry.go:31] will retry after 465.500211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:11.171627  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.171733  303437 retry.go:31] will retry after 644.635053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.177835  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:11.248736  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.248773  303437 retry.go:31] will retry after 646.277835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.413044  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:11.574386  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:11.635719  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.635755  303437 retry.go:31] will retry after 992.827501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.816838  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.874310  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.874341  303437 retry.go:31] will retry after 847.092889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.895446  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:11.912890  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:11.979233  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.979274  303437 retry.go:31] will retry after 1.723803171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.412929  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:12.629708  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:12.711328  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.711402  303437 retry.go:31] will retry after 1.682909305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.721580  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:12.787715  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.787755  303437 retry.go:31] will retry after 1.523563907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.912980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.412270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
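
Interleaved with the failing applies, a watcher polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly twice a second to detect when the apiserver process comes back. A hypothetical poller with the same observable behavior (the function name and interval are assumptions, not minikube code):

```go
// Hypothetical poller mirroring the pgrep probes in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether any process whose full command line matches
// the pattern exists; pgrep exits 0 on a match and 1 when none is found.
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for range ticker.C {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		fmt.Println("kube-apiserver not found, waiting...")
	}
}
```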
	I1210 07:06:13.704137  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:13.769291  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.769319  303437 retry.go:31] will retry after 2.655752177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.912604  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:14.312036  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:14.379977  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.380010  303437 retry.go:31] will retry after 2.120509482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.395420  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:14.412979  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:14.494970  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.495005  303437 retry.go:31] will retry after 2.083776468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.913027  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.412429  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.912376  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:16.412255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
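(The half-second pgrep probes interleaved with the applies are minikube waiting for a kube-apiserver process to appear; the addon applies retry independently in parallel. A self-contained sketch of that polling loop follows; the function name and two-minute budget are illustrative assumptions, not minikube's code.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver process roughly every
// 500ms, mirroring the pgrep probes in the log. pgrep exits 0 when the
// pattern matches a live process and non-zero otherwise.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}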
	I1210 07:06:16.425325  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:16.500296  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.500325  303437 retry.go:31] will retry after 1.753545178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.501400  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:16.562473  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.562506  303437 retry.go:31] will retry after 5.63085781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.579894  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:16.640721  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.640756  303437 retry.go:31] will retry after 2.710169887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.912245  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.412350  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.913142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.254741  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:18.317147  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.317176  303437 retry.go:31] will retry after 6.057763532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.912752  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:19.352062  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:19.412870  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:19.413382  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.413410  303437 retry.go:31] will retry after 6.763226999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.913016  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.412997  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.913098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:21.412278  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:21.913122  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.194391  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:22.251091  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.251123  303437 retry.go:31] will retry after 9.11395006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.412163  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.912351  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.412284  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.913156  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:24.375236  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:24.412827  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:24.440293  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.440322  303437 retry.go:31] will retry after 9.4401753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.912889  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.412233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.912307  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:26.177306  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:26.250932  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.250965  303437 retry.go:31] will retry after 5.997165797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.412268  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:26.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.412900  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.912402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.412186  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.912521  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.412227  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.912255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.413237  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.912254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:31.366162  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:31.412559  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:31.439835  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.439865  303437 retry.go:31] will retry after 9.181638872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.912411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.248486  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:32.313416  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.313450  303437 retry.go:31] will retry after 9.93876945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.412880  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.912746  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.412590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.880694  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:33.912312  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.964338  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:33.964372  303437 retry.go:31] will retry after 6.698338092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:34.413098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:34.912991  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.413188  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.912404  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:36.412320  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:36.912280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.412192  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.912490  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.412402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.912902  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.412781  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.912868  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.413057  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.621960  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:40.663144  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:40.779058  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.779095  303437 retry.go:31] will retry after 16.870406936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:40.830377  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.830410  303437 retry.go:31] will retry after 13.844749205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
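(kubectl's hint to pass --validate=false would not rescue these applies: disabling validation only skips the OpenAPI download, and the apply request itself must still reach the same unreachable localhost:8443 endpoint. One way to confirm the endpoint is down is to probe /healthz directly; the sketch below is illustrative, and the InsecureSkipVerify setting is for this probe only.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the same address kubectl fails to reach in the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches the connection-refused errors
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz status:", resp.Status)
}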
	I1210 07:06:40.912652  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.412296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.912802  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.252520  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:42.323589  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.323630  303437 retry.go:31] will retry after 27.422515535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.412805  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.912953  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.412903  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.912754  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.412272  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.912265  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.412790  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.912791  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.413202  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.912321  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.412292  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.912507  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.412885  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.912342  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.413070  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.912837  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.412236  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.912907  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.913181  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.412208  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.912275  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.412923  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:54.412280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
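
Interleaved with the addon retries, the ~500ms cadence of pgrep calls above is minikube waiting for a kube-apiserver process to appear on the node. A self-contained sketch of that poll, assuming the same pgrep pattern as the log; waitForAPIServerProcess and the 2-minute deadline are illustrative, not minikube's actual API.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls until pgrep finds a matching process
// or the deadline passes. pgrep exits 0 only when a match exists.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("kube-apiserver process not found within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
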
	I1210 07:06:54.676234  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:54.749679  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:54.749717  303437 retry.go:31] will retry after 32.358913109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:54.913072  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.412886  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.913073  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.412961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.912198  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.412942  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.649751  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:57.723910  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.723937  303437 retry.go:31] will retry after 19.76255611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.912185  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.412253  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.912817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.412285  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.912592  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.412249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.912270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.412382  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.912282  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.412190  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.912865  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.412818  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.912286  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.412820  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.913148  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.412411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.912250  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.412297  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.913174  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.412239  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.912324  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:08.412210  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:08.912197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:08.912278  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:08.940273  303437 cri.go:89] found id: ""
	I1210 07:07:08.940300  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.940309  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:08.940316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:08.940374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:08.976821  303437 cri.go:89] found id: ""
	I1210 07:07:08.976848  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.976857  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:08.976863  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:08.976928  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:09.004516  303437 cri.go:89] found id: ""
	I1210 07:07:09.004546  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.004555  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:09.004561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:09.004633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:09.029569  303437 cri.go:89] found id: ""
	I1210 07:07:09.029593  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.029602  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:09.029609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:09.029666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:09.055232  303437 cri.go:89] found id: ""
	I1210 07:07:09.055256  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.055265  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:09.055281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:09.055342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:09.080957  303437 cri.go:89] found id: ""
	I1210 07:07:09.080978  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.080986  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:09.080992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:09.081051  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:09.105491  303437 cri.go:89] found id: ""
	I1210 07:07:09.105561  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.105583  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:09.105603  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:09.105682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:09.129839  303437 cri.go:89] found id: ""
	I1210 07:07:09.129861  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.129870  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
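
With no apiserver process found, the run falls back to enumerating control-plane containers directly through the CRI: one `crictl ps -a --quiet --name=<component>` per component, where an empty ID list produces the `found id: ""` / `0 containers` lines above. A hedged sketch of that enumeration (the component list is taken from the log; the error handling is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
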
	I1210 07:07:09.129879  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:09.129890  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:09.157418  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:09.157444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:09.218619  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:09.218655  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:09.233569  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:09.233598  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:09.299933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
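
All of the refused dials above reduce to one symptom: nothing is listening on localhost:8443 inside the node, so every kubectl call fails at the TCP layer. A plain dial, sketched below, reproduces that check without involving kubectl at all; the address comes from the log, while the 2-second timeout is an arbitrary choice.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Corresponds to the "connect: connection refused" errors in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
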
	I1210 07:07:09.299954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:09.299968  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:09.746365  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:09.810849  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:09.810882  303437 retry.go:31] will retry after 38.106772232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:11.825038  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:11.835407  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:11.835491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:11.859384  303437 cri.go:89] found id: ""
	I1210 07:07:11.859407  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.859416  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:11.859422  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:11.859482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:11.883645  303437 cri.go:89] found id: ""
	I1210 07:07:11.883667  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.883677  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:11.883683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:11.883746  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:11.912907  303437 cri.go:89] found id: ""
	I1210 07:07:11.912987  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.913010  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:11.913029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:11.913135  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:11.954332  303437 cri.go:89] found id: ""
	I1210 07:07:11.954354  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.954363  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:11.954369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:11.954447  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:11.987932  303437 cri.go:89] found id: ""
	I1210 07:07:11.988008  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.988024  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:11.988048  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:11.988134  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:12.016019  303437 cri.go:89] found id: ""
	I1210 07:07:12.016043  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.016052  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:12.016059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:12.016161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:12.041574  303437 cri.go:89] found id: ""
	I1210 07:07:12.041616  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.041625  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:12.041633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:12.041702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:12.067242  303437 cri.go:89] found id: ""
	I1210 07:07:12.067309  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.067335  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:12.067351  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:12.067368  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:12.080423  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:12.080492  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:12.142902  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:12.135099    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.135777    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.137430    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.138066    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.139703    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:12.135099    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.135777    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.137430    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.138066    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.139703    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:12.142926  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:12.142940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:12.170013  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:12.170095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:12.205843  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:12.205871  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
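
Each diagnostic cycle gathers the same four log sources using the shell commands shown verbatim above: journalctl for kubelet and containerd, a filtered dmesg tail, and a crictl/docker container listing. A rough sketch of driving those collectors from Go, assuming local shell access rather than minikube's SSH runner; gatherLogs is an illustrative name.

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs each collector under bash -c, exactly as the log shows,
// and returns the captured output (or the error) per source.
func gatherLogs() map[string]string {
	collectors := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"containers": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	results := make(map[string]string)
	for name, cmd := range collectors {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			results[name] = fmt.Sprintf("error: %v\n%s", err, out)
			continue
		}
		results[name] = string(out)
	}
	return results
}

func main() {
	for name, out := range gatherLogs() {
		fmt.Printf("=== %s (%d bytes)\n", name, len(out))
	}
}
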
	I1210 07:07:14.769151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:14.779543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:14.779628  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:14.804854  303437 cri.go:89] found id: ""
	I1210 07:07:14.804877  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.804885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:14.804892  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:14.804951  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:14.829499  303437 cri.go:89] found id: ""
	I1210 07:07:14.829521  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.829529  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:14.829535  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:14.829592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:14.857960  303437 cri.go:89] found id: ""
	I1210 07:07:14.857984  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.857993  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:14.858000  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:14.858058  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:14.882942  303437 cri.go:89] found id: ""
	I1210 07:07:14.882964  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.882972  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:14.882978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:14.883074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:14.906556  303437 cri.go:89] found id: ""
	I1210 07:07:14.906582  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.906591  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:14.906598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:14.906653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:14.944744  303437 cri.go:89] found id: ""
	I1210 07:07:14.944771  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.944780  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:14.944796  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:14.944859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:14.974225  303437 cri.go:89] found id: ""
	I1210 07:07:14.974248  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.974256  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:14.974263  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:14.974323  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:15.005431  303437 cri.go:89] found id: ""
	I1210 07:07:15.005515  303437 logs.go:282] 0 containers: []
	W1210 07:07:15.005539  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:15.005564  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:15.005607  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:15.075329  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:15.067284    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.067872    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069470    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069869    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.071556    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:15.067284    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.067872    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069470    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069869    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.071556    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:15.075363  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:15.075376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:15.100635  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:15.100670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:15.129987  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:15.130013  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:15.198219  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:15.198300  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:17.487235  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:17.543553  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:17.543587  303437 retry.go:31] will retry after 31.69876155s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:17.712834  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:17.723193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:17.723262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:17.747430  303437 cri.go:89] found id: ""
	I1210 07:07:17.747453  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.747462  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:17.747468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:17.747525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:17.771960  303437 cri.go:89] found id: ""
	I1210 07:07:17.771982  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.771990  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:17.771996  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:17.772060  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:17.796155  303437 cri.go:89] found id: ""
	I1210 07:07:17.796176  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.796184  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:17.796190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:17.796251  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:17.825359  303437 cri.go:89] found id: ""
	I1210 07:07:17.825385  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.825394  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:17.825401  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:17.825462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:17.853147  303437 cri.go:89] found id: ""
	I1210 07:07:17.853170  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.853178  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:17.853184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:17.853243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:17.878806  303437 cri.go:89] found id: ""
	I1210 07:07:17.878830  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.878839  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:17.878846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:17.878905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:17.902975  303437 cri.go:89] found id: ""
	I1210 07:07:17.902999  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.903007  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:17.903037  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:17.903112  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:17.934568  303437 cri.go:89] found id: ""
	I1210 07:07:17.934592  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.934600  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:17.934610  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:17.934621  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:17.999695  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:17.999740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:18.029219  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:18.029256  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:18.094199  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:18.085818    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.086373    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.088284    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.089006    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.090767    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:18.085818    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.086373    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.088284    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.089006    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.090767    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:18.094223  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:18.094238  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:18.120245  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:18.120283  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.649514  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:20.661165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:20.661236  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:20.686549  303437 cri.go:89] found id: ""
	I1210 07:07:20.686572  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.686581  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:20.686587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:20.686654  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:20.711873  303437 cri.go:89] found id: ""
	I1210 07:07:20.711895  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.711903  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:20.711910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:20.711968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:20.736261  303437 cri.go:89] found id: ""
	I1210 07:07:20.736283  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.736292  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:20.736298  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:20.736360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:20.765759  303437 cri.go:89] found id: ""
	I1210 07:07:20.765781  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.765797  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:20.765804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:20.765862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:20.793639  303437 cri.go:89] found id: ""
	I1210 07:07:20.793661  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.793669  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:20.793675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:20.793751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:20.818318  303437 cri.go:89] found id: ""
	I1210 07:07:20.818339  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.818347  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:20.818354  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:20.818417  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:20.843499  303437 cri.go:89] found id: ""
	I1210 07:07:20.843523  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.843533  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:20.843539  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:20.843598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:20.868745  303437 cri.go:89] found id: ""
	I1210 07:07:20.868768  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.868776  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:20.868785  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:20.868796  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.897905  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:20.897981  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:20.962576  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:20.962654  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:20.977746  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:20.977835  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:21.045052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:21.037239    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.037840    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.039622    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.040051    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.041574    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:21.037239    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.037840    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.039622    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.040051    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.041574    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:21.045073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:21.045085  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
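	[Editor's sketch] The cycle above repeats below at ~3s intervals: minikube probes for each control-plane component with `crictl ps -a --quiet --name=<component>` (cri.go:54/logs.go:282) and, finding none, gathers kubelet/dmesg/containerd logs. A minimal Go illustration of that per-component query follows; it is not minikube's cri.go, only the pattern, with the crictl flags taken verbatim from the log.

	// Illustrative only: approximates the per-component check the log repeats.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns IDs of containers whose name matches `name`,
	// the same query the log issues for each control-plane component.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c) // mirrors logs.go:284
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}

	An empty result here is exactly what the log records as `found id: ""` / `0 containers`, which is why every subsequent `kubectl describe nodes` against localhost:8443 is refused: no apiserver container is running to accept the connection.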
	I1210 07:07:23.570777  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:23.580946  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:23.581021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:23.605355  303437 cri.go:89] found id: ""
	I1210 07:07:23.605379  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.605388  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:23.605394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:23.605451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:23.632675  303437 cri.go:89] found id: ""
	I1210 07:07:23.632697  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.632706  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:23.632713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:23.632783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:23.656579  303437 cri.go:89] found id: ""
	I1210 07:07:23.656602  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.656610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:23.656617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:23.656675  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:23.684796  303437 cri.go:89] found id: ""
	I1210 07:07:23.684816  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.684825  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:23.684832  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:23.684893  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:23.709043  303437 cri.go:89] found id: ""
	I1210 07:07:23.709064  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.709073  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:23.709079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:23.709149  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:23.733315  303437 cri.go:89] found id: ""
	I1210 07:07:23.733340  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.733348  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:23.733355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:23.733413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:23.761492  303437 cri.go:89] found id: ""
	I1210 07:07:23.761514  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.761524  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:23.761530  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:23.761586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:23.786489  303437 cri.go:89] found id: ""
	I1210 07:07:23.786511  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.786520  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:23.786530  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:23.786540  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:23.812193  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:23.812231  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:23.842956  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:23.842990  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:23.898018  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:23.898052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:23.912477  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:23.912507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:23.996757  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:23.988502    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.989251    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.990734    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.991360    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.993170    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:23.988502    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.989251    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.990734    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.991360    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.993170    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:26.497835  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:26.508472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:26.508547  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:26.533241  303437 cri.go:89] found id: ""
	I1210 07:07:26.533264  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.533272  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:26.533279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:26.533337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:26.558844  303437 cri.go:89] found id: ""
	I1210 07:07:26.558868  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.558877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:26.558883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:26.558941  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:26.584008  303437 cri.go:89] found id: ""
	I1210 07:07:26.584042  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.584051  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:26.584058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:26.584176  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:26.609123  303437 cri.go:89] found id: ""
	I1210 07:07:26.609145  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.609153  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:26.609160  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:26.609220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:26.633105  303437 cri.go:89] found id: ""
	I1210 07:07:26.633127  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.633136  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:26.633142  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:26.633220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:26.662834  303437 cri.go:89] found id: ""
	I1210 07:07:26.662858  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.662875  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:26.662897  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:26.662989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:26.688296  303437 cri.go:89] found id: ""
	I1210 07:07:26.688318  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.688326  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:26.688332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:26.688401  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:26.714475  303437 cri.go:89] found id: ""
	I1210 07:07:26.714545  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.714564  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:26.714595  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:26.714609  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:26.769794  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:26.769827  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:26.782871  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:26.782909  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:26.843846  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:26.836227    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.836952    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838581    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838893    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.840461    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:26.836227    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.836952    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838581    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838893    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.840461    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:26.843867  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:26.843881  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:26.869319  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:26.869353  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:27.109532  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:27.174544  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:27.174590  303437 retry.go:31] will retry after 31.997742819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
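	[Editor's sketch] The stanza above shows addons.go applying storage-provisioner.yaml, failing (apiserver unreachable), and scheduling a retry after ~32s (retry.go:31). A generic apply-with-backoff sketch in Go follows; it is not minikube's retry.go. The attempt count, the fixed doubling backoff, and running kubectl directly (rather than over SSH with KUBECONFIG, as the log does) are all assumptions for illustration; the kubectl flags are taken from the log.

	// Generic retry-with-backoff illustration of the pattern logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		backoff := 2 * time.Second // assumed starting interval
		var err error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest)
			if out, e := cmd.CombinedOutput(); e == nil {
				return nil
			} else {
				err = fmt.Errorf("apply failed: %v: %s", e, out)
			}
			fmt.Printf("will retry after %s: %v\n", backoff, err) // cf. retry.go:31 message
			time.Sleep(backoff)
			backoff *= 2 // assumed exponential growth between attempts
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}

	Under this pattern the apply can only succeed once the apiserver comes up; in the run recorded here it never does, so the retries below keep failing with the same connection-refused error.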
	I1210 07:07:29.396194  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:29.406428  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:29.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:29.433424  303437 cri.go:89] found id: ""
	I1210 07:07:29.433455  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.433465  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:29.433471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:29.433536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:29.463589  303437 cri.go:89] found id: ""
	I1210 07:07:29.463615  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.463624  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:29.463630  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:29.463686  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:29.492343  303437 cri.go:89] found id: ""
	I1210 07:07:29.492365  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.492374  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:29.492380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:29.492437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:29.516069  303437 cri.go:89] found id: ""
	I1210 07:07:29.516097  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.516106  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:29.516113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:29.516171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:29.539661  303437 cri.go:89] found id: ""
	I1210 07:07:29.539693  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.539703  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:29.539712  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:29.539781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:29.563791  303437 cri.go:89] found id: ""
	I1210 07:07:29.563814  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.563823  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:29.563829  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:29.563887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:29.589136  303437 cri.go:89] found id: ""
	I1210 07:07:29.589160  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.589168  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:29.589175  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:29.589233  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:29.614701  303437 cri.go:89] found id: ""
	I1210 07:07:29.614724  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.614734  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:29.614743  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:29.614756  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:29.670207  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:29.670240  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:29.683977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:29.684005  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:29.748039  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:29.739856    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.740576    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742311    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742892    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.744661    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:29.739856    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.740576    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742311    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742892    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.744661    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:29.748061  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:29.748077  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:29.772992  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:29.773024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:32.300508  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:32.310795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:32.310865  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:32.334361  303437 cri.go:89] found id: ""
	I1210 07:07:32.334387  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.334396  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:32.334403  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:32.334478  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:32.361534  303437 cri.go:89] found id: ""
	I1210 07:07:32.361627  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.361651  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:32.361681  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:32.361764  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:32.386488  303437 cri.go:89] found id: ""
	I1210 07:07:32.386513  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.386521  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:32.386528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:32.386588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:32.415239  303437 cri.go:89] found id: ""
	I1210 07:07:32.415265  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.415274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:32.415280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:32.415340  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:32.443074  303437 cri.go:89] found id: ""
	I1210 07:07:32.443097  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.443105  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:32.443111  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:32.443170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:32.477593  303437 cri.go:89] found id: ""
	I1210 07:07:32.477620  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.477629  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:32.477636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:32.477693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:32.502550  303437 cri.go:89] found id: ""
	I1210 07:07:32.502575  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.502584  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:32.502590  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:32.502666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:32.527562  303437 cri.go:89] found id: ""
	I1210 07:07:32.527585  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.527606  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:32.527616  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:32.527632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:32.588732  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:32.581238    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.581723    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583416    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583864    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.585321    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:32.581238    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.581723    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583416    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583864    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.585321    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:32.588755  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:32.588767  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:32.614322  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:32.614354  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:32.642747  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:32.642777  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:32.697541  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:32.697576  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:35.211281  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:35.221258  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:35.221336  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:35.253168  303437 cri.go:89] found id: ""
	I1210 07:07:35.253193  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.253203  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:35.253210  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:35.253268  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:35.281234  303437 cri.go:89] found id: ""
	I1210 07:07:35.281257  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.281267  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:35.281273  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:35.281333  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:35.310530  303437 cri.go:89] found id: ""
	I1210 07:07:35.310554  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.310563  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:35.310570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:35.310627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:35.334764  303437 cri.go:89] found id: ""
	I1210 07:07:35.334792  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.334801  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:35.334813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:35.334870  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:35.361502  303437 cri.go:89] found id: ""
	I1210 07:07:35.361525  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.361534  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:35.361540  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:35.361607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:35.389058  303437 cri.go:89] found id: ""
	I1210 07:07:35.389080  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.389089  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:35.389095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:35.389154  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:35.425176  303437 cri.go:89] found id: ""
	I1210 07:07:35.425215  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.425226  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:35.425232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:35.425299  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:35.453052  303437 cri.go:89] found id: ""
	I1210 07:07:35.453079  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.453088  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:35.453097  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:35.453108  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:35.522148  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:35.513319    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.513889    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.516065    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517374    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517853    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:35.513319    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.513889    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.516065    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517374    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517853    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:35.522174  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:35.522186  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:35.547665  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:35.547698  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:35.575564  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:35.575596  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:35.634362  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:35.634400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:38.149569  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:38.160486  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:38.160568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:38.201222  303437 cri.go:89] found id: ""
	I1210 07:07:38.201245  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.201253  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:38.201260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:38.201317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:38.237151  303437 cri.go:89] found id: ""
	I1210 07:07:38.237174  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.237183  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:38.237189  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:38.237259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:38.262732  303437 cri.go:89] found id: ""
	I1210 07:07:38.262760  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.262770  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:38.262777  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:38.262835  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:38.293247  303437 cri.go:89] found id: ""
	I1210 07:07:38.293273  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.293283  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:38.293290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:38.293351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:38.317818  303437 cri.go:89] found id: ""
	I1210 07:07:38.317840  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.317849  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:38.317855  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:38.317911  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:38.342419  303437 cri.go:89] found id: ""
	I1210 07:07:38.342447  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.342465  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:38.342473  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:38.342545  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:38.367206  303437 cri.go:89] found id: ""
	I1210 07:07:38.367271  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.367295  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:38.367316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:38.367408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:38.395595  303437 cri.go:89] found id: ""
	I1210 07:07:38.395617  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.395626  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:38.395635  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:38.395646  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:38.455465  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:38.455496  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:38.469974  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:38.470052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:38.534901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:38.526759    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.527529    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529111    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529677    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.531428    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:38.526759    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.527529    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529111    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529677    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.531428    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:38.534975  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:38.535033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:38.560101  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:38.560133  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:41.091155  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:41.101359  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:41.101439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:41.124928  303437 cri.go:89] found id: ""
	I1210 07:07:41.124950  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.124958  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:41.124964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:41.125021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:41.150502  303437 cri.go:89] found id: ""
	I1210 07:07:41.150525  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.150534  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:41.150541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:41.150597  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:41.175254  303437 cri.go:89] found id: ""
	I1210 07:07:41.175280  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.175289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:41.175295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:41.175355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:41.213279  303437 cri.go:89] found id: ""
	I1210 07:07:41.213302  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.213311  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:41.213317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:41.213376  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:41.241895  303437 cri.go:89] found id: ""
	I1210 07:07:41.241922  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.241931  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:41.241938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:41.241997  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:41.266233  303437 cri.go:89] found id: ""
	I1210 07:07:41.266259  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.266274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:41.266280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:41.266375  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:41.295481  303437 cri.go:89] found id: ""
	I1210 07:07:41.295503  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.295512  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:41.295519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:41.295586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:41.325350  303437 cri.go:89] found id: ""
	I1210 07:07:41.325372  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.325381  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:41.325390  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:41.325402  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:41.381086  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:41.381121  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:41.394364  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:41.394411  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:41.475813  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:41.467819    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.468574    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.470350    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.471004    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.472517    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
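Every describe-nodes attempt in this run fails the same way: kubectl cannot reach the apiserver on localhost:8443 because no kube-apiserver container exists yet (note the empty crictl listings above). A quick way to confirm the symptom from inside the node, as a hypothetical manual check rather than part of this test run, is to probe the apiserver's health endpoint directly:

    # Expect "connection refused" until the control plane is actually up.
    curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"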
	I1210 07:07:41.475836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:41.475849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:41.500717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:41.500751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
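The container-status command uses a small shell fallback: command substitution resolves crictl's path when it is installed; otherwise echo supplies the literal word crictl, that invocation fails, and the trailing || falls through to the docker CLI. A minimal sketch of the same pattern, written out explicitly (not minikube source):

    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a      # list all CRI containers, any state
    else
        sudo docker ps -a      # fall back to the docker CLI
    fi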
	I1210 07:07:44.031462  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
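This pgrep call is the readiness probe driving the section: -f matches against the full command line, -x requires that match to be exact, and -n returns only the newest matching PID. The roughly three-second cadence of the timestamps suggests a wait loop of this general shape (an illustrative sketch, not minikube's actual code):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3    # between attempts, minikube re-lists containers and gathers logs
    done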
	I1210 07:07:44.042099  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:44.042173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:44.066643  303437 cri.go:89] found id: ""
	I1210 07:07:44.066674  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.066683  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:44.066689  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:44.066752  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:44.091511  303437 cri.go:89] found id: ""
	I1210 07:07:44.091533  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.091542  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:44.091548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:44.091627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:44.116433  303437 cri.go:89] found id: ""
	I1210 07:07:44.116455  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.116464  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:44.116470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:44.116527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:44.141546  303437 cri.go:89] found id: ""
	I1210 07:07:44.141568  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.141576  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:44.141583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:44.141659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:44.183580  303437 cri.go:89] found id: ""
	I1210 07:07:44.183602  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.183610  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:44.183616  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:44.183673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:44.214628  303437 cri.go:89] found id: ""
	I1210 07:07:44.214651  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.214659  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:44.214666  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:44.214738  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:44.241699  303437 cri.go:89] found id: ""
	I1210 07:07:44.241721  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.241729  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:44.241736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:44.241805  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:44.266706  303437 cri.go:89] found id: ""
	I1210 07:07:44.266729  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.266737  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:44.266746  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:44.266758  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:44.321835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:44.321867  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:44.335089  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:44.335120  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:44.395294  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:44.387779    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.388344    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389371    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389875    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.391491    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:44.395360  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:44.395388  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:44.425916  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:44.425956  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:46.965660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:46.976149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:46.976221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:47.003597  303437 cri.go:89] found id: ""
	I1210 07:07:47.003620  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.003629  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:47.003636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:47.003709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:47.028196  303437 cri.go:89] found id: ""
	I1210 07:07:47.028218  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.028226  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:47.028232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:47.028290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:47.056800  303437 cri.go:89] found id: ""
	I1210 07:07:47.056824  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.056833  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:47.056840  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:47.056916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:47.081593  303437 cri.go:89] found id: ""
	I1210 07:07:47.081656  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.081678  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:47.081697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:47.081767  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:47.110385  303437 cri.go:89] found id: ""
	I1210 07:07:47.110451  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.110474  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:47.110492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:47.110563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:47.136398  303437 cri.go:89] found id: ""
	I1210 07:07:47.136465  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.136490  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:47.136503  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:47.136576  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:47.162521  303437 cri.go:89] found id: ""
	I1210 07:07:47.162545  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.162554  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:47.162560  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:47.162617  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:47.200031  303437 cri.go:89] found id: ""
	I1210 07:07:47.200052  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.200060  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:47.200069  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:47.200080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:47.240172  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:47.240197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:47.295589  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:47.295625  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:47.308817  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:47.308843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:47.373455  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:47.365473    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.366342    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.367939    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.368531    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.370138    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:47.373479  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:47.373504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:47.918542  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:48.000256  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:48.000468  303437 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
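The kubectl error text suggests --validate=false, but that would only skip the OpenAPI download that fails first; the apply itself still needs a reachable apiserver, which is why minikube queues a retry instead of disabling validation. For illustration only, the hinted manual invocation would look like this (hypothetical, and it would still fail while port 8443 refuses connections):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml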
	I1210 07:07:49.243254  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:49.300794  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:49.300885  303437 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:07:49.898427  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:49.908683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:49.908754  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:49.934109  303437 cri.go:89] found id: ""
	I1210 07:07:49.934136  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.934145  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:49.934152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:49.934214  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:49.959202  303437 cri.go:89] found id: ""
	I1210 07:07:49.959226  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.959235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:49.959252  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:49.959329  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:49.983331  303437 cri.go:89] found id: ""
	I1210 07:07:49.983356  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.983364  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:49.983371  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:49.983427  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:50.012230  303437 cri.go:89] found id: ""
	I1210 07:07:50.012265  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.012274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:50.012281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:50.012350  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:50.039851  303437 cri.go:89] found id: ""
	I1210 07:07:50.039880  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.039889  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:50.039895  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:50.039962  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:50.071162  303437 cri.go:89] found id: ""
	I1210 07:07:50.071186  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.071195  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:50.071201  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:50.071265  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:50.097095  303437 cri.go:89] found id: ""
	I1210 07:07:50.097118  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.097127  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:50.097134  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:50.097198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:50.121941  303437 cri.go:89] found id: ""
	I1210 07:07:50.121966  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.121976  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:50.121985  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:50.121998  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:50.178251  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:50.178286  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:50.195455  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:50.195491  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:50.283052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:50.274829    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.275404    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277130    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277736    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.279468    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:50.283077  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:50.283098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:50.309433  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:50.309472  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:52.837493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:52.848301  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:52.848370  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:52.872661  303437 cri.go:89] found id: ""
	I1210 07:07:52.872682  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.872690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:52.872696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:52.872755  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:52.895064  303437 cri.go:89] found id: ""
	I1210 07:07:52.895090  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.895100  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:52.895112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:52.895170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:52.918926  303437 cri.go:89] found id: ""
	I1210 07:07:52.918950  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.918958  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:52.918964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:52.919038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:52.942801  303437 cri.go:89] found id: ""
	I1210 07:07:52.942823  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.942831  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:52.942838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:52.942895  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:52.968885  303437 cri.go:89] found id: ""
	I1210 07:07:52.968910  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.968919  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:52.968925  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:52.968984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:52.992050  303437 cri.go:89] found id: ""
	I1210 07:07:52.992072  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.992080  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:52.992087  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:52.992145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:53.020481  303437 cri.go:89] found id: ""
	I1210 07:07:53.020507  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.020516  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:53.020523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:53.020586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:53.045391  303437 cri.go:89] found id: ""
	I1210 07:07:53.045412  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.045421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:53.045430  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:53.045441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:53.100408  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:53.100444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:53.115165  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:53.115192  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:53.192011  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:53.181167    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.181972    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.182929    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.185331    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.186084    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:53.192034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:53.192049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:53.220495  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:53.220572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:55.749081  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:55.759242  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:55.759314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:55.782656  303437 cri.go:89] found id: ""
	I1210 07:07:55.782681  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.782690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:55.782707  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:55.782766  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:55.807483  303437 cri.go:89] found id: ""
	I1210 07:07:55.807509  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.807527  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:55.807534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:55.807595  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:55.832851  303437 cri.go:89] found id: ""
	I1210 07:07:55.832887  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.832896  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:55.832906  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:55.832966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:55.857553  303437 cri.go:89] found id: ""
	I1210 07:07:55.857575  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.857584  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:55.857591  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:55.857653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:55.885207  303437 cri.go:89] found id: ""
	I1210 07:07:55.885230  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.885240  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:55.885246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:55.885315  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:55.909296  303437 cri.go:89] found id: ""
	I1210 07:07:55.909322  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.909332  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:55.909340  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:55.909398  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:55.933701  303437 cri.go:89] found id: ""
	I1210 07:07:55.933723  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.933733  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:55.933740  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:55.933812  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:55.958095  303437 cri.go:89] found id: ""
	I1210 07:07:55.958121  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.958130  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:55.958139  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:55.958150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:56.028949  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:56.021322    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.021920    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023068    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023441    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.025194    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:56.028976  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:56.029046  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:56.055269  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:56.055308  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:56.087408  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:56.087438  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:56.143537  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:56.143570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:58.657737  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:58.669685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:58.669751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:58.704925  303437 cri.go:89] found id: ""
	I1210 07:07:58.704947  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.704955  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:58.704962  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:58.705021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:58.732775  303437 cri.go:89] found id: ""
	I1210 07:07:58.732798  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.732806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:58.732812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:58.732871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:58.757863  303437 cri.go:89] found id: ""
	I1210 07:07:58.757885  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.757893  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:58.757899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:58.757957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:58.782893  303437 cri.go:89] found id: ""
	I1210 07:07:58.782914  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.782923  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:58.782929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:58.782987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:58.813425  303437 cri.go:89] found id: ""
	I1210 07:07:58.813458  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.813467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:58.813474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:58.813531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:58.837894  303437 cri.go:89] found id: ""
	I1210 07:07:58.837920  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.837930  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:58.837937  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:58.837994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:58.862767  303437 cri.go:89] found id: ""
	I1210 07:07:58.862793  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.862803  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:58.862810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:58.862871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:58.887161  303437 cri.go:89] found id: ""
	I1210 07:07:58.887190  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.887203  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:58.887213  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:58.887226  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:58.912742  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:58.912774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:58.941751  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:58.941778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:58.997499  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:58.997538  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:59.012690  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:59.012716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:59.079032  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
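Every `kubectl describe nodes` attempt in this section fails the same way: nothing is listening on localhost:8443, so the client dies during API discovery with "connection refused". A self-contained probe of that endpoint (assuming the same localhost:8443 address the log's kubectl calls dial) reproduces the symptom:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver endpoint every kubectl call in the log uses.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver up, this prints the log's symptom:
		// dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```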
	I1210 07:07:59.173255  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:59.241772  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:59.241906  303437 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:07:59.245162  303437 out.go:179] * Enabled addons: 
	I1210 07:07:59.248019  303437 addons.go:530] duration metric: took 1m50.382393488s for enable addons: enabled=[]
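The storage-provisioner enable fails for the same underlying reason: `kubectl apply` needs the server's OpenAPI endpoint for validation, and addons.go logs "apply failed, will retry" before giving up with `enabled=[]`. A minimal sketch of that retry-with-backoff shape, where `applyManifest` is a hypothetical stand-in for the kubectl invocation, not minikube's actual helper:

```go
package main

import (
	"fmt"
	"time"
)

// applyManifest is a hypothetical stand-in for running
// kubectl apply --force -f <path>; here it always fails the way the log does.
func applyManifest(path string) error {
	return fmt.Errorf("apply %s: dial tcp [::1]:8443: connect: connection refused", path)
}

// applyWithRetry mirrors the "apply failed, will retry" shape in the log.
func applyWithRetry(path string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyManifest(path); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2 // back off between attempts
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	fmt.Println(applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, time.Second))
}
```

With the apiserver down, every attempt returns the same dial error, so retries only delay the eventual failure.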
	I1210 07:08:01.579277  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:01.590395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:01.590469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:01.616988  303437 cri.go:89] found id: ""
	I1210 07:08:01.617017  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.617025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:01.617032  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:01.617095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:01.643533  303437 cri.go:89] found id: ""
	I1210 07:08:01.643555  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.643563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:01.643570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:01.643633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:01.683402  303437 cri.go:89] found id: ""
	I1210 07:08:01.683430  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.683439  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:01.683446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:01.683507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:01.714420  303437 cri.go:89] found id: ""
	I1210 07:08:01.714448  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.714457  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:01.714463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:01.714522  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:01.741588  303437 cri.go:89] found id: ""
	I1210 07:08:01.741614  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.741625  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:01.741632  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:01.741697  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:01.766133  303437 cri.go:89] found id: ""
	I1210 07:08:01.766163  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.766172  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:01.766178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:01.766246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:01.796151  303437 cri.go:89] found id: ""
	I1210 07:08:01.796173  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.796181  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:01.796188  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:01.796253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:01.821826  303437 cri.go:89] found id: ""
	I1210 07:08:01.821848  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.821857  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:01.821872  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:01.821883  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:01.856135  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:01.856162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:01.912548  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:01.912582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:01.926252  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:01.926279  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:01.989471  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:01.989491  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:01.989504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:04.519169  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:04.529774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:04.529853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:04.557926  303437 cri.go:89] found id: ""
	I1210 07:08:04.557950  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.557967  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:04.557988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:04.558067  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:04.585171  303437 cri.go:89] found id: ""
	I1210 07:08:04.585195  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.585204  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:04.585223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:04.585292  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:04.613695  303437 cri.go:89] found id: ""
	I1210 07:08:04.613720  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.613729  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:04.613735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:04.613808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:04.637775  303437 cri.go:89] found id: ""
	I1210 07:08:04.637859  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.637880  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:04.637899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:04.637989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:04.673966  303437 cri.go:89] found id: ""
	I1210 07:08:04.674033  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.674057  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:04.674073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:04.674161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:04.706760  303437 cri.go:89] found id: ""
	I1210 07:08:04.706825  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.706846  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:04.706865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:04.706955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:04.748640  303437 cri.go:89] found id: ""
	I1210 07:08:04.748707  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.748731  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:04.748749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:04.748837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:04.778179  303437 cri.go:89] found id: ""
	I1210 07:08:04.778241  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.778263  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:04.778283  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:04.778324  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:04.838994  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:04.839038  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:04.852663  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:04.852737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:04.919247  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:04.910928    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.911685    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913313    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913950    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.915537    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:04.910928    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.911685    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913313    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913950    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.915537    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:04.919311  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:04.919346  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:04.944409  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:04.944441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:07.475233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:07.485817  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:07.485889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:07.510450  303437 cri.go:89] found id: ""
	I1210 07:08:07.510473  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.510482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:07.510488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:07.510549  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:07.536516  303437 cri.go:89] found id: ""
	I1210 07:08:07.536541  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.536550  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:07.536556  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:07.536646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:07.561868  303437 cri.go:89] found id: ""
	I1210 07:08:07.561893  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.561902  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:07.561908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:07.561987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:07.590197  303437 cri.go:89] found id: ""
	I1210 07:08:07.590221  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.590230  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:07.590236  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:07.590342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:07.613514  303437 cri.go:89] found id: ""
	I1210 07:08:07.613539  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.613548  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:07.613555  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:07.613662  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:07.638377  303437 cri.go:89] found id: ""
	I1210 07:08:07.638402  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.638410  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:07.638417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:07.638477  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:07.667985  303437 cri.go:89] found id: ""
	I1210 07:08:07.668058  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.668082  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:07.668102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:07.668189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:07.698530  303437 cri.go:89] found id: ""
	I1210 07:08:07.698605  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.698647  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:07.698671  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:07.698710  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:07.761708  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:07.761745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:07.775951  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:07.775978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:07.842158  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:07.833714    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.834583    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836355    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836801    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.838463    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:07.833714    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.834583    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836355    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836801    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.838463    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:07.842183  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:07.842200  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:07.868656  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:07.868693  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:10.398249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:10.410905  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:10.410974  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:10.441450  303437 cri.go:89] found id: ""
	I1210 07:08:10.441474  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.441482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:10.441489  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:10.441551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:10.467324  303437 cri.go:89] found id: ""
	I1210 07:08:10.467345  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.467354  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:10.467360  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:10.467422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:10.490980  303437 cri.go:89] found id: ""
	I1210 07:08:10.491001  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.491117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:10.491125  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:10.491186  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:10.515608  303437 cri.go:89] found id: ""
	I1210 07:08:10.515673  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.515688  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:10.515696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:10.515753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:10.540198  303437 cri.go:89] found id: ""
	I1210 07:08:10.540223  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.540232  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:10.540246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:10.540304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:10.565060  303437 cri.go:89] found id: ""
	I1210 07:08:10.565125  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.565140  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:10.565155  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:10.565219  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:10.593396  303437 cri.go:89] found id: ""
	I1210 07:08:10.593430  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.593438  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:10.593445  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:10.593510  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:10.617363  303437 cri.go:89] found id: ""
	I1210 07:08:10.617395  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.617405  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:10.617414  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:10.617426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:10.677240  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:10.677317  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:10.692150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:10.692220  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:10.758835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:10.750046    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.750802    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.753608    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.754055    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.755555    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:10.750046    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.750802    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.753608    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.754055    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.755555    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:10.758906  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:10.758934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:10.783900  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:10.783935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:13.316158  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:13.326768  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:13.326841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:13.354375  303437 cri.go:89] found id: ""
	I1210 07:08:13.354402  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.354411  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:13.354417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:13.354486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:13.379439  303437 cri.go:89] found id: ""
	I1210 07:08:13.379467  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.379479  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:13.379491  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:13.379572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:13.406403  303437 cri.go:89] found id: ""
	I1210 07:08:13.406425  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.406433  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:13.406439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:13.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:13.441528  303437 cri.go:89] found id: ""
	I1210 07:08:13.441633  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.441665  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:13.441698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:13.441887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:13.485367  303437 cri.go:89] found id: ""
	I1210 07:08:13.485407  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.485416  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:13.485423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:13.485491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:13.515544  303437 cri.go:89] found id: ""
	I1210 07:08:13.515572  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.515582  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:13.515588  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:13.515646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:13.541572  303437 cri.go:89] found id: ""
	I1210 07:08:13.541604  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.541613  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:13.541620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:13.541692  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:13.566335  303437 cri.go:89] found id: ""
	I1210 07:08:13.566366  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.566376  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:13.566385  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:13.566396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:13.622359  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:13.622391  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:13.635632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:13.635661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:13.716667  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:13.707886    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.708774    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.710652    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.711407    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.713108    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:13.707886    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.708774    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.710652    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.711407    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.713108    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:13.716691  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:13.716711  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:13.743967  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:13.744002  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:16.273094  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:16.283420  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:16.283488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:16.307336  303437 cri.go:89] found id: ""
	I1210 07:08:16.307358  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.307366  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:16.307373  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:16.307430  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:16.330448  303437 cri.go:89] found id: ""
	I1210 07:08:16.330476  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.330485  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:16.330492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:16.330552  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:16.362050  303437 cri.go:89] found id: ""
	I1210 07:08:16.362080  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.362089  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:16.362096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:16.362172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:16.385708  303437 cri.go:89] found id: ""
	I1210 07:08:16.385732  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.385741  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:16.385747  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:16.385852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:16.421398  303437 cri.go:89] found id: ""
	I1210 07:08:16.421427  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.421436  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:16.421442  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:16.421509  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:16.449046  303437 cri.go:89] found id: ""
	I1210 07:08:16.449074  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.449082  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:16.449089  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:16.449166  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:16.475499  303437 cri.go:89] found id: ""
	I1210 07:08:16.475525  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.475534  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:16.475541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:16.475619  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:16.502476  303437 cri.go:89] found id: ""
	I1210 07:08:16.502506  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.502515  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:16.502524  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:16.502535  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:16.530854  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:16.530929  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:16.586993  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:16.587030  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:16.600337  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:16.600364  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:16.669775  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:16.660570    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.661341    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.663229    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.664010    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.665709    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:16.660570    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.661341    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.663229    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.664010    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.665709    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:16.669849  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:16.669875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:19.199141  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:19.209670  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:19.209739  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:19.242748  303437 cri.go:89] found id: ""
	I1210 07:08:19.242775  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.242784  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:19.242791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:19.242849  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:19.266957  303437 cri.go:89] found id: ""
	I1210 07:08:19.266980  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.266989  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:19.266995  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:19.267066  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:19.293252  303437 cri.go:89] found id: ""
	I1210 07:08:19.293276  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.293285  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:19.293292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:19.293349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:19.318070  303437 cri.go:89] found id: ""
	I1210 07:08:19.318096  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.318105  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:19.318112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:19.318171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:19.341744  303437 cri.go:89] found id: ""
	I1210 07:08:19.341769  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.341783  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:19.341789  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:19.341847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:19.366605  303437 cri.go:89] found id: ""
	I1210 07:08:19.366632  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.366641  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:19.366648  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:19.366706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:19.393536  303437 cri.go:89] found id: ""
	I1210 07:08:19.393561  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.393570  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:19.393576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:19.393633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:19.422513  303437 cri.go:89] found id: ""
	I1210 07:08:19.422535  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.422546  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:19.422556  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:19.422566  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:19.453046  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:19.453118  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:19.488889  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:19.488918  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:19.547224  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:19.547259  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:19.562006  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:19.562035  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:19.625530  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:19.617363    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.618299    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620102    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620530    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.622148    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:19.617363    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.618299    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620102    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620530    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.622148    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
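	[editor's note] Every "describe nodes" attempt in this log fails the same way: kubectl cannot reach the apiserver, and "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on that port, which is consistent with the empty kube-apiserver container listing above. A quick standalone check of the port, sketched in Go (hypothetical diagnostic, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" from kubectl implies this dial fails immediately.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}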
	I1210 07:08:22.125860  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:22.136477  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:22.136550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:22.164763  303437 cri.go:89] found id: ""
	I1210 07:08:22.164786  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.164795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:22.164801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:22.164861  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:22.190879  303437 cri.go:89] found id: ""
	I1210 07:08:22.190900  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.190909  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:22.190915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:22.190973  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:22.215247  303437 cri.go:89] found id: ""
	I1210 07:08:22.215278  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.215286  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:22.215292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:22.215351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:22.239059  303437 cri.go:89] found id: ""
	I1210 07:08:22.239086  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.239095  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:22.239102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:22.239163  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:22.264259  303437 cri.go:89] found id: ""
	I1210 07:08:22.264284  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.264293  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:22.264299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:22.264357  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:22.289890  303437 cri.go:89] found id: ""
	I1210 07:08:22.289913  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.289923  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:22.289929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:22.289987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:22.317025  303437 cri.go:89] found id: ""
	I1210 07:08:22.317051  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.317060  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:22.317067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:22.317124  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:22.341933  303437 cri.go:89] found id: ""
	I1210 07:08:22.341965  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.341974  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:22.341992  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:22.342004  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:22.398310  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:22.398344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:22.413479  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:22.413520  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:22.490851  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:22.482047    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.482772    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.484530    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.485118    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.486931    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:22.482047    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.482772    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.484530    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.485118    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.486931    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:22.490873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:22.490888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:22.518860  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:22.518891  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
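	[editor's note] The timestamps (07:08:19, 07:08:22, 07:08:25, ...) show the whole cycle repeating roughly every three seconds: minikube re-probes for a kube-apiserver process with pgrep, re-lists containers, and re-gathers logs until its wait deadline expires. A minimal sketch of that poll loop, assuming a fixed 3s interval and an illustrative 6-minute budget (hypothetical values, not minikube's actual wait code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed overall wait budget
		for time.Now().Before(deadline) {
			// Same probe as the log's "sudo pgrep -xnf kube-apiserver.*minikube.*":
			// exit status 0 means a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second) // matches the ~3s spacing of the log timestamps
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}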
	I1210 07:08:25.049142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:25.060069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:25.060142  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:25.089203  303437 cri.go:89] found id: ""
	I1210 07:08:25.089232  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.089242  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:25.089248  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:25.089317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:25.118751  303437 cri.go:89] found id: ""
	I1210 07:08:25.118776  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.118785  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:25.118791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:25.118848  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:25.143129  303437 cri.go:89] found id: ""
	I1210 07:08:25.143163  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.143173  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:25.143179  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:25.143240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:25.169805  303437 cri.go:89] found id: ""
	I1210 07:08:25.169830  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.169839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:25.169846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:25.169905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:25.194716  303437 cri.go:89] found id: ""
	I1210 07:08:25.194743  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.194752  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:25.194759  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:25.194818  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:25.221104  303437 cri.go:89] found id: ""
	I1210 07:08:25.221127  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.221135  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:25.221141  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:25.221199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:25.249738  303437 cri.go:89] found id: ""
	I1210 07:08:25.249762  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.249771  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:25.249784  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:25.249842  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:25.273527  303437 cri.go:89] found id: ""
	I1210 07:08:25.273552  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.273562  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:25.273572  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:25.273583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:25.298962  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:25.298996  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:25.326742  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:25.326770  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:25.381274  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:25.381307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:25.394260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:25.394289  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:25.485635  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:25.478200    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.479077    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.480640    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.481256    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.482290    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:25.478200    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.479077    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.480640    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.481256    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.482290    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:27.987151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:28.000081  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:28.000164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:28.025871  303437 cri.go:89] found id: ""
	I1210 07:08:28.025896  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.025904  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:28.025917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:28.025978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:28.050799  303437 cri.go:89] found id: ""
	I1210 07:08:28.050822  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.050831  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:28.050837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:28.050902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:28.075890  303437 cri.go:89] found id: ""
	I1210 07:08:28.075912  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.075921  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:28.075928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:28.075988  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:28.100461  303437 cri.go:89] found id: ""
	I1210 07:08:28.100483  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.100492  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:28.100499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:28.100555  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:28.126583  303437 cri.go:89] found id: ""
	I1210 07:08:28.126607  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.126617  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:28.126623  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:28.126682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:28.156736  303437 cri.go:89] found id: ""
	I1210 07:08:28.156758  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.156767  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:28.156774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:28.156837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:28.181562  303437 cri.go:89] found id: ""
	I1210 07:08:28.181635  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.181657  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:28.181675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:28.181760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:28.206007  303437 cri.go:89] found id: ""
	I1210 07:08:28.206081  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.206106  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:28.206127  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:28.206163  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:28.219409  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:28.219445  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:28.285367  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:28.277994    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.278613    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280604    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.282182    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:28.277994    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.278613    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280604    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.282182    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:28.285387  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:28.285399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:28.310115  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:28.310150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:28.337400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:28.337427  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
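	[editor's note] Each "Gathering logs for ..." pass collects the last 400 lines of the kubelet and containerd journals, a severity-filtered dmesg, and a container status listing with a crictl-or-docker fallback (the `which crictl || echo crictl` ... `|| sudo docker ps -a` command). A sketch that replays the same shell commands from the log (hypothetical collector, assuming a bash-capable host):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The shell commands shown in the log, keyed by the label logs.go prints.
		cmds := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"containerd":       "journalctl -u containerd -n 400",
			"dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "`which crictl || echo crictl` ps -a || docker ps -a",
		}
		for name, cmd := range cmds {
			out, err := exec.Command("/bin/bash", "-c", "sudo "+cmd).CombinedOutput()
			fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
		}
	}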
	I1210 07:08:30.895800  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:30.906215  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:30.906285  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:30.940989  303437 cri.go:89] found id: ""
	I1210 07:08:30.941016  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.941025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:30.941031  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:30.941089  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:30.968174  303437 cri.go:89] found id: ""
	I1210 07:08:30.968196  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.968205  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:30.968211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:30.968267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:30.997147  303437 cri.go:89] found id: ""
	I1210 07:08:30.997181  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.997191  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:30.997198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:30.997324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:31.027985  303437 cri.go:89] found id: ""
	I1210 07:08:31.028024  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.028033  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:31.028039  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:31.028101  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:31.052662  303437 cri.go:89] found id: ""
	I1210 07:08:31.052684  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.052693  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:31.052699  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:31.052760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:31.078026  303437 cri.go:89] found id: ""
	I1210 07:08:31.078051  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.078060  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:31.078067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:31.078129  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:31.106108  303437 cri.go:89] found id: ""
	I1210 07:08:31.106135  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.106144  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:31.106150  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:31.106212  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:31.133109  303437 cri.go:89] found id: ""
	I1210 07:08:31.133133  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.133141  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:31.133150  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:31.133162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:31.158330  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:31.158369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:31.190546  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:31.190570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:31.245193  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:31.245228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:31.258848  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:31.258882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:31.332332  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:31.324865    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.325506    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327103    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327745    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.329226    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:31.324865    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.325506    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327103    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327745    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.329226    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:33.832563  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:33.843389  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:33.843462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:33.868588  303437 cri.go:89] found id: ""
	I1210 07:08:33.868612  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.868621  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:33.868627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:33.868691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:33.893467  303437 cri.go:89] found id: ""
	I1210 07:08:33.893492  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.893501  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:33.893507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:33.893568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:33.925853  303437 cri.go:89] found id: ""
	I1210 07:08:33.925883  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.925892  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:33.925899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:33.925961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:33.957483  303437 cri.go:89] found id: ""
	I1210 07:08:33.957507  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.957516  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:33.957523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:33.957582  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:33.990903  303437 cri.go:89] found id: ""
	I1210 07:08:33.990927  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.990937  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:33.990943  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:33.991005  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:34.017222  303437 cri.go:89] found id: ""
	I1210 07:08:34.017249  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.017258  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:34.017264  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:34.017346  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:34.043888  303437 cri.go:89] found id: ""
	I1210 07:08:34.043913  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.043921  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:34.043928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:34.044001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:34.069229  303437 cri.go:89] found id: ""
	I1210 07:08:34.069299  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.069314  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:34.069325  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:34.069337  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:34.127059  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:34.127093  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:34.140507  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:34.140537  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:34.205618  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:34.198206    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.198939    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200406    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200898    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.202347    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:34.198206    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.198939    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200406    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200898    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.202347    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:34.205639  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:34.205651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:34.230228  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:34.230258  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:36.756574  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:36.768692  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:36.768761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:36.791900  303437 cri.go:89] found id: ""
	I1210 07:08:36.791922  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.791930  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:36.791936  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:36.791994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:36.818662  303437 cri.go:89] found id: ""
	I1210 07:08:36.818683  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.818691  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:36.818697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:36.818753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:36.846695  303437 cri.go:89] found id: ""
	I1210 07:08:36.846718  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.846727  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:36.846733  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:36.846794  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:36.870384  303437 cri.go:89] found id: ""
	I1210 07:08:36.870408  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.870417  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:36.870423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:36.870486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:36.895312  303437 cri.go:89] found id: ""
	I1210 07:08:36.895335  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.895343  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:36.895349  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:36.895408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:36.926574  303437 cri.go:89] found id: ""
	I1210 07:08:36.926602  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.926611  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:36.926617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:36.926684  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:36.956760  303437 cri.go:89] found id: ""
	I1210 07:08:36.956786  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.956795  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:36.956801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:36.956864  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:36.983460  303437 cri.go:89] found id: ""
	I1210 07:08:36.983480  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.983488  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:36.983497  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:36.983512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:37.039889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:37.039926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:37.053431  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:37.053508  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:37.117639  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:37.109822    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.110762    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.111850    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.112355    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.113887    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:37.109822    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.110762    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.111850    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.112355    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.113887    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:37.117660  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:37.117673  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:37.148315  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:37.148357  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:39.681355  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:39.695207  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:39.695290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:39.725514  303437 cri.go:89] found id: ""
	I1210 07:08:39.725547  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.725556  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:39.725563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:39.725632  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:39.750801  303437 cri.go:89] found id: ""
	I1210 07:08:39.750834  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.750844  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:39.750850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:39.750920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:39.775756  303437 cri.go:89] found id: ""
	I1210 07:08:39.775779  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.775788  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:39.775794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:39.775853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:39.805059  303437 cri.go:89] found id: ""
	I1210 07:08:39.805085  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.805094  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:39.805100  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:39.805158  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:39.829219  303437 cri.go:89] found id: ""
	I1210 07:08:39.829284  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.829301  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:39.829309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:39.829371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:39.858144  303437 cri.go:89] found id: ""
	I1210 07:08:39.858168  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.858177  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:39.858184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:39.858243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:39.886805  303437 cri.go:89] found id: ""
	I1210 07:08:39.886838  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.886846  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:39.886853  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:39.886919  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:39.918064  303437 cri.go:89] found id: ""
	I1210 07:08:39.918089  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.918099  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:39.918108  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:39.918119  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:39.982343  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:39.982418  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:39.995829  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:39.995854  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:40.078976  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:40.070049    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.070741    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.072500    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.073281    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.074971    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:40.070049    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.070741    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.072500    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.073281    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.074971    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:40.079001  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:40.079033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:40.105734  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:40.105778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:42.635583  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:42.646316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:42.646387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:42.687725  303437 cri.go:89] found id: ""
	I1210 07:08:42.687746  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.687755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:42.687761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:42.687821  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:42.731127  303437 cri.go:89] found id: ""
	I1210 07:08:42.731148  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.731157  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:42.731163  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:42.731224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:42.761187  303437 cri.go:89] found id: ""
	I1210 07:08:42.761218  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.761227  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:42.761232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:42.761293  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:42.789156  303437 cri.go:89] found id: ""
	I1210 07:08:42.789184  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.789193  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:42.789200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:42.789259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:42.813508  303437 cri.go:89] found id: ""
	I1210 07:08:42.813533  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.813542  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:42.813548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:42.813607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:42.838567  303437 cri.go:89] found id: ""
	I1210 07:08:42.838591  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.838601  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:42.838608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:42.838667  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:42.862315  303437 cri.go:89] found id: ""
	I1210 07:08:42.862340  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.862348  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:42.862355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:42.862415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:42.888411  303437 cri.go:89] found id: ""
	I1210 07:08:42.888486  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.888502  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:42.888513  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:42.888526  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:42.950009  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:42.950042  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:42.965591  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:42.965617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:43.040631  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:43.032737    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.033256    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035076    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035768    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.037307    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:43.040653  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:43.040667  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:43.067163  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:43.067197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
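	For reference, each probe cycle in this log reduces to the shell sequence below (a minimal sketch, not part of the original run, assuming shell access to the minikube node; the component names mirror the ones queried above, and quoting around the pgrep pattern is added here for safety):

	    #!/bin/bash
	    # Re-create minikube's control-plane probe from the cycle above:
	    # check for a running kube-apiserver process, then ask the CRI whether
	    # any container (running or exited) exists for each expected component.
	    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard"

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	    for name in $components; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        [ -z "$ids" ] && echo "no container found matching \"$name\""
	    done

	In the log, every one of these checks comes back empty, which is why each cycle falls through to gathering journalctl and dmesg output instead.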
	I1210 07:08:45.596845  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:45.607484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:45.607551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:45.631812  303437 cri.go:89] found id: ""
	I1210 07:08:45.631841  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.631851  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:45.631857  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:45.631916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:45.656686  303437 cri.go:89] found id: ""
	I1210 07:08:45.656709  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.656717  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:45.656724  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:45.656782  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:45.705244  303437 cri.go:89] found id: ""
	I1210 07:08:45.705270  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.705279  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:45.705286  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:45.705349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:45.733649  303437 cri.go:89] found id: ""
	I1210 07:08:45.733671  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.733679  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:45.733685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:45.733748  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:45.758319  303437 cri.go:89] found id: ""
	I1210 07:08:45.758340  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.758349  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:45.758355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:45.758416  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:45.782339  303437 cri.go:89] found id: ""
	I1210 07:08:45.782360  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.782369  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:45.782375  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:45.782434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:45.806598  303437 cri.go:89] found id: ""
	I1210 07:08:45.806624  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.806633  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:45.806640  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:45.806700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:45.830909  303437 cri.go:89] found id: ""
	I1210 07:08:45.830933  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.830942  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:45.830951  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:45.830962  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:45.859118  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:45.859148  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:45.920835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:45.920869  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:45.935529  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:45.935555  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:46.015051  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:46.007172    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.007866    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.009596    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.010127    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.011638    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
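	Every "describe nodes" attempt above fails the same way: nothing is accepting connections on the apiserver port, so the client errors out before any API discovery. Two quick checks from inside the node would confirm this directly (a sketch, not part of the original run; 8443 is the port shown in the errors):

	    # Is anything listening on the apiserver port, and does it answer?
	    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	    # Given the log above, expect curl to report "Connection refused".
	    curl -ksS --max-time 5 https://localhost:8443/healthz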
	I1210 07:08:46.015073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:46.015086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:48.541223  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:48.551805  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:48.551874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:48.576818  303437 cri.go:89] found id: ""
	I1210 07:08:48.576878  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.576891  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:48.576898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:48.576963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:48.601980  303437 cri.go:89] found id: ""
	I1210 07:08:48.602005  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.602014  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:48.602020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:48.602082  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:48.634301  303437 cri.go:89] found id: ""
	I1210 07:08:48.634324  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.634333  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:48.634339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:48.634399  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:48.665296  303437 cri.go:89] found id: ""
	I1210 07:08:48.665321  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.665330  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:48.665336  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:48.665395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:48.696396  303437 cri.go:89] found id: ""
	I1210 07:08:48.696421  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.696430  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:48.696437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:48.696500  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:48.732263  303437 cri.go:89] found id: ""
	I1210 07:08:48.732288  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.732297  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:48.732304  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:48.732365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:48.759127  303437 cri.go:89] found id: ""
	I1210 07:08:48.759152  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.759161  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:48.759170  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:48.759229  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:48.783999  303437 cri.go:89] found id: ""
	I1210 07:08:48.784077  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.784100  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:48.784116  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:48.784141  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:48.797102  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:48.797132  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:48.859523  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:48.852279    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.852826    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854371    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854816    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.856244    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:48.859546  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:48.859560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:48.884680  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:48.884714  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:48.923070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:48.923098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:51.485606  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:51.496059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:51.496133  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:51.521404  303437 cri.go:89] found id: ""
	I1210 07:08:51.521429  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.521438  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:51.521444  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:51.521504  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:51.546743  303437 cri.go:89] found id: ""
	I1210 07:08:51.546768  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.546777  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:51.546785  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:51.546847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:51.577064  303437 cri.go:89] found id: ""
	I1210 07:08:51.577089  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.577099  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:51.577105  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:51.577171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:51.602384  303437 cri.go:89] found id: ""
	I1210 07:08:51.602410  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.602420  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:51.602426  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:51.602484  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:51.630338  303437 cri.go:89] found id: ""
	I1210 07:08:51.630367  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.630375  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:51.630382  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:51.630440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:51.660663  303437 cri.go:89] found id: ""
	I1210 07:08:51.660691  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.660700  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:51.660706  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:51.660765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:51.689142  303437 cri.go:89] found id: ""
	I1210 07:08:51.689170  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.689179  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:51.689186  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:51.689246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:51.723765  303437 cri.go:89] found id: ""
	I1210 07:08:51.723792  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.723800  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:51.723810  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:51.723824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:51.781842  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:51.781873  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:51.795845  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:51.795872  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:51.863519  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:51.855577    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.856333    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858048    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858719    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.860050    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:51.863583  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:51.863611  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:51.888478  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:51.888510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:54.421755  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:54.432308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:54.432377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:54.458171  303437 cri.go:89] found id: ""
	I1210 07:08:54.458194  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.458209  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:54.458216  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:54.458279  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:54.485658  303437 cri.go:89] found id: ""
	I1210 07:08:54.485689  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.485698  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:54.485704  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:54.485763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:54.514257  303437 cri.go:89] found id: ""
	I1210 07:08:54.514279  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.514287  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:54.514294  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:54.514360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:54.538966  303437 cri.go:89] found id: ""
	I1210 07:08:54.539053  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.539078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:54.539096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:54.539182  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:54.563486  303437 cri.go:89] found id: ""
	I1210 07:08:54.563512  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.563521  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:54.563528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:54.563588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:54.588780  303437 cri.go:89] found id: ""
	I1210 07:08:54.588805  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.588814  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:54.588827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:54.588886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:54.618322  303437 cri.go:89] found id: ""
	I1210 07:08:54.618346  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.618356  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:54.618362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:54.618421  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:54.643564  303437 cri.go:89] found id: ""
	I1210 07:08:54.643592  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.643602  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:54.643612  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:54.643624  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:54.683994  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:54.684069  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:54.743900  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:54.743934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:54.757240  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:54.757266  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:54.820795  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:54.813522    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.813935    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.815612    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.816020    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.817550    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:54.820815  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:54.820830  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.345608  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:57.358499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:57.358625  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:57.384563  303437 cri.go:89] found id: ""
	I1210 07:08:57.384589  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.384598  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:57.384604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:57.384682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:57.408236  303437 cri.go:89] found id: ""
	I1210 07:08:57.408263  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.408272  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:57.408279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:57.408337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:57.432014  303437 cri.go:89] found id: ""
	I1210 07:08:57.432037  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.432045  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:57.432052  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:57.432111  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:57.455970  303437 cri.go:89] found id: ""
	I1210 07:08:57.456046  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.456068  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:57.456088  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:57.456173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:57.480680  303437 cri.go:89] found id: ""
	I1210 07:08:57.480752  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.480767  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:57.480775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:57.480841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:57.505993  303437 cri.go:89] found id: ""
	I1210 07:08:57.506026  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.506037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:57.506043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:57.506153  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:57.530713  303437 cri.go:89] found id: ""
	I1210 07:08:57.530739  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.530748  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:57.530754  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:57.530814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:57.555806  303437 cri.go:89] found id: ""
	I1210 07:08:57.555871  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.555897  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:57.555918  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:57.555943  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:57.611292  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:57.611326  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:57.624707  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:57.624735  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:57.707745  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:57.699963    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.701079    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702632    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702942    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.704373    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:57.707768  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:57.707780  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.734701  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:57.734734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:00.266582  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:00.305476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:00.305924  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:00.366724  303437 cri.go:89] found id: ""
	I1210 07:09:00.366806  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.366839  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:00.366879  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:00.366992  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:00.396827  303437 cri.go:89] found id: ""
	I1210 07:09:00.396912  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.396939  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:00.396960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:00.397064  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:00.424504  303437 cri.go:89] found id: ""
	I1210 07:09:00.424531  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.424540  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:00.424547  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:00.424609  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:00.453893  303437 cri.go:89] found id: ""
	I1210 07:09:00.453921  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.453931  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:00.453938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:00.454001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:00.480406  303437 cri.go:89] found id: ""
	I1210 07:09:00.480432  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.480441  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:00.480448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:00.480508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:00.505747  303437 cri.go:89] found id: ""
	I1210 07:09:00.505779  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.505788  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:00.505795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:00.505856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:00.535288  303437 cri.go:89] found id: ""
	I1210 07:09:00.535311  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.535320  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:00.535326  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:00.535387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:00.565945  303437 cri.go:89] found id: ""
	I1210 07:09:00.565972  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.565989  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:00.566015  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:00.566034  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:00.596202  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:00.596228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:00.651714  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:00.651748  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:00.666338  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:00.666375  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:00.745706  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:00.737632    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.738139    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.739647    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.740156    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.741940    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:00.745728  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:00.745742  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.272316  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:03.283628  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:03.283695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:03.309180  303437 cri.go:89] found id: ""
	I1210 07:09:03.309263  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.309285  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:03.309300  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:03.309373  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:03.334971  303437 cri.go:89] found id: ""
	I1210 07:09:03.334994  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.335003  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:03.335035  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:03.335096  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:03.361090  303437 cri.go:89] found id: ""
	I1210 07:09:03.361116  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.361125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:03.361131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:03.361189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:03.385067  303437 cri.go:89] found id: ""
	I1210 07:09:03.385141  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.385161  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:03.385169  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:03.385259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:03.420428  303437 cri.go:89] found id: ""
	I1210 07:09:03.420450  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.420459  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:03.420465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:03.420527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:03.453131  303437 cri.go:89] found id: ""
	I1210 07:09:03.453153  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.453162  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:03.453168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:03.453281  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:03.485206  303437 cri.go:89] found id: ""
	I1210 07:09:03.485236  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.485245  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:03.485251  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:03.485311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:03.517204  303437 cri.go:89] found id: ""
	I1210 07:09:03.517229  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.517238  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:03.517253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:03.517265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:03.530656  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:03.530728  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:03.596244  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:03.588660    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.589167    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.590688    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.591215    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.592799    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:03.596305  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:03.596342  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.621847  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:03.621882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:03.649988  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:03.650024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
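	Each cycle above starts by checking for a kube-apiserver process, then lists CRI containers by component name; an empty result from "crictl ps -a --quiet --name=X" is what produces the found id: "" / 0 containers lines. For reference, a minimal Go sketch of that listing step (not minikube's cri.go implementation; assumes crictl is on PATH and the node's CRI socket is reachable):

// crilist.go: sketch of the "listing CRI containers" step seen above.
// Not minikube's implementation; it only shows how an empty
// "crictl ps -a --quiet --name=X" output maps to the 0-containers lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (any state) whose name
// matches the given component, exactly as the log's crictl invocation does.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl prints one container ID per line; empty output means no match.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container matching %q\n", name) // the W-level lines above
			continue
		}
		fmt.Printf("%q: %v\n", name, ids)
	}
}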
	I1210 07:09:06.209516  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:06.219893  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:06.219970  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:06.244763  303437 cri.go:89] found id: ""
	I1210 07:09:06.244786  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.244795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:06.244801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:06.244862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:06.271479  303437 cri.go:89] found id: ""
	I1210 07:09:06.271501  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.271509  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:06.271515  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:06.271572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:06.295607  303437 cri.go:89] found id: ""
	I1210 07:09:06.295635  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.295644  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:06.295651  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:06.295706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:06.320774  303437 cri.go:89] found id: ""
	I1210 07:09:06.320798  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.320806  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:06.320823  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:06.320886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:06.349033  303437 cri.go:89] found id: ""
	I1210 07:09:06.349056  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.349064  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:06.349070  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:06.349127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:06.377330  303437 cri.go:89] found id: ""
	I1210 07:09:06.377352  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.377361  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:06.377367  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:06.377426  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:06.400983  303437 cri.go:89] found id: ""
	I1210 07:09:06.401005  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.401014  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:06.401021  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:06.401080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:06.431299  303437 cri.go:89] found id: ""
	I1210 07:09:06.431327  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.431336  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:06.431345  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:06.431356  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:06.462335  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:06.462369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:06.495348  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:06.495376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:06.551592  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:06.551627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:06.565270  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:06.565305  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:06.629933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:06.621965    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.622716    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.624429    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.625124    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.626708    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
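	The "connection refused" from kubectl here indicates that nothing is listening on the apiserver port at all, consistent with the kube-apiserver container never having been created; a hung apiserver would instead surface as an i/o timeout. A quick hypothetical probe (not part of minikube or the test harness) to tell the two apart:

// probe8443.go: hypothetical helper distinguishing "connection refused"
// (port closed, apiserver never started) from a timeout (listening but
// unresponsive).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connect: connection refused" -> nothing bound to 8443 (this report's case).
		// "i/o timeout"                 -> a listener exists but is not answering.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}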
	I1210 07:09:09.131098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:09.141585  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:09.141658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:09.168859  303437 cri.go:89] found id: ""
	I1210 07:09:09.168889  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.168898  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:09.168904  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:09.168966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:09.193427  303437 cri.go:89] found id: ""
	I1210 07:09:09.193448  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.193457  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:09.193463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:09.193520  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:09.217804  303437 cri.go:89] found id: ""
	I1210 07:09:09.217928  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.217954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:09.217975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:09.218083  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:09.242204  303437 cri.go:89] found id: ""
	I1210 07:09:09.242277  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.242303  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:09.242322  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:09.242404  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:09.268889  303437 cri.go:89] found id: ""
	I1210 07:09:09.268912  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.268920  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:09.268926  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:09.268984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:09.293441  303437 cri.go:89] found id: ""
	I1210 07:09:09.293514  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.293545  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:09.293563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:09.293671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:09.321925  303437 cri.go:89] found id: ""
	I1210 07:09:09.321946  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.321954  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:09.321960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:09.322026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:09.350603  303437 cri.go:89] found id: ""
	I1210 07:09:09.350623  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.350631  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:09.350641  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:09.350653  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:09.363382  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:09.363409  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:09.429669  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:09.421586    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.422246    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424200    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424743    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.426494    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:09.429690  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:09.429702  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:09.461410  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:09.461444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:09.500508  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:09.500536  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
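	The "Gathering logs for ..." steps fan out over a fixed set of host commands, visible verbatim in the Run: lines above. minikube executes them over SSH via ssh_runner.go; the sketch below runs the same commands locally for illustration (command strings copied from the log; running them requires sudo on a node with systemd and crictl):

// gather.go: local illustration of the log-gathering fan-out. Not
// minikube's logs.go; same commands, run locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Printf("==> %s <==\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
		}
		fmt.Print(string(out))
	}
}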
	I1210 07:09:12.055555  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:12.066220  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:12.066289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:12.093446  303437 cri.go:89] found id: ""
	I1210 07:09:12.093468  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.093477  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:12.093484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:12.093543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:12.119338  303437 cri.go:89] found id: ""
	I1210 07:09:12.119361  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.119370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:12.119376  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:12.119436  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:12.146532  303437 cri.go:89] found id: ""
	I1210 07:09:12.146553  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.146562  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:12.146568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:12.146623  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:12.175977  303437 cri.go:89] found id: ""
	I1210 07:09:12.175999  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.176007  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:12.176013  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:12.176072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:12.200557  303437 cri.go:89] found id: ""
	I1210 07:09:12.200579  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.200588  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:12.200595  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:12.200651  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:12.224652  303437 cri.go:89] found id: ""
	I1210 07:09:12.224674  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.224684  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:12.224690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:12.224750  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:12.249147  303437 cri.go:89] found id: ""
	I1210 07:09:12.249171  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.249180  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:12.249187  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:12.249253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:12.272500  303437 cri.go:89] found id: ""
	I1210 07:09:12.272535  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.272543  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:12.272553  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:12.272580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:12.328368  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:12.328399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:12.341669  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:12.341699  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:12.401653  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:12.394790    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.395266    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396400    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396898    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.398538    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:12.401708  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:12.401734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:12.431751  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:12.431791  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:14.963924  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:14.974138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:14.974206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:15.001054  303437 cri.go:89] found id: ""
	I1210 07:09:15.001080  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.001089  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:15.001097  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:15.001170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:15.040020  303437 cri.go:89] found id: ""
	I1210 07:09:15.040044  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.040053  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:15.040059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:15.040121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:15.065063  303437 cri.go:89] found id: ""
	I1210 07:09:15.065086  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.065095  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:15.065101  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:15.065161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:15.089689  303437 cri.go:89] found id: ""
	I1210 07:09:15.089714  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.089723  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:15.089729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:15.089797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:15.117422  303437 cri.go:89] found id: ""
	I1210 07:09:15.117446  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.117455  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:15.117462  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:15.117521  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:15.143475  303437 cri.go:89] found id: ""
	I1210 07:09:15.143498  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.143507  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:15.143514  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:15.143580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:15.168329  303437 cri.go:89] found id: ""
	I1210 07:09:15.168353  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.168363  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:15.168370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:15.168439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:15.196848  303437 cri.go:89] found id: ""
	I1210 07:09:15.196870  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.196879  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:15.196889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:15.196901  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:15.210071  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:15.210098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:15.270835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:15.262938    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.263645    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265180    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265486    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.267063    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:15.270858  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:15.270870  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:15.296738  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:15.296774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:15.322760  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:15.322786  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:17.877564  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:17.887770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:17.887840  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:17.923653  303437 cri.go:89] found id: ""
	I1210 07:09:17.923691  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.923701  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:17.923708  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:17.923789  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:17.953013  303437 cri.go:89] found id: ""
	I1210 07:09:17.953058  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.953067  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:17.953073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:17.953155  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:17.987520  303437 cri.go:89] found id: ""
	I1210 07:09:17.987565  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.987574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:17.987587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:17.987655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:18.017344  303437 cri.go:89] found id: ""
	I1210 07:09:18.017367  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.017378  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:18.017385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:18.017448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:18.043560  303437 cri.go:89] found id: ""
	I1210 07:09:18.043592  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.043602  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:18.043609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:18.043670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:18.071253  303437 cri.go:89] found id: ""
	I1210 07:09:18.071299  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.071308  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:18.071317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:18.071395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:18.100328  303437 cri.go:89] found id: ""
	I1210 07:09:18.100350  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.100359  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:18.100364  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:18.100422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:18.124828  303437 cri.go:89] found id: ""
	I1210 07:09:18.124855  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.124864  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:18.124873  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:18.124906  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:18.180441  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:18.180473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:18.193811  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:18.193838  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:18.254675  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:18.247379    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.248083    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.249676    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.250042    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.251523    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:18.254700  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:18.254720  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:18.280133  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:18.280167  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:20.813863  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:20.824103  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:20.824175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:20.847793  303437 cri.go:89] found id: ""
	I1210 07:09:20.847818  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.847827  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:20.847833  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:20.847896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:20.873295  303437 cri.go:89] found id: ""
	I1210 07:09:20.873319  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.873328  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:20.873334  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:20.873394  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:20.897570  303437 cri.go:89] found id: ""
	I1210 07:09:20.897594  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.897603  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:20.897609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:20.897665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:20.932999  303437 cri.go:89] found id: ""
	I1210 07:09:20.933025  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.933034  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:20.933041  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:20.933099  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:20.967096  303437 cri.go:89] found id: ""
	I1210 07:09:20.967123  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.967137  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:20.967143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:20.967203  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:20.994239  303437 cri.go:89] found id: ""
	I1210 07:09:20.994265  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.994274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:20.994281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:20.994337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:21.020205  303437 cri.go:89] found id: ""
	I1210 07:09:21.020230  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.020238  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:21.020245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:21.020305  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:21.049401  303437 cri.go:89] found id: ""
	I1210 07:09:21.049427  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.049436  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:21.049445  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:21.049457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:21.062901  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:21.062926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:21.122517  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:21.115640    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.116120    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117591    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117985    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.119485    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:21.122537  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:21.122550  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:21.147196  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:21.147230  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:21.177192  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:21.177221  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:23.732133  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:23.742890  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:23.742961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:23.774220  303437 cri.go:89] found id: ""
	I1210 07:09:23.774243  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.774251  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:23.774257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:23.774317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:23.798816  303437 cri.go:89] found id: ""
	I1210 07:09:23.798837  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.798846  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:23.798852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:23.798910  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:23.823244  303437 cri.go:89] found id: ""
	I1210 07:09:23.823318  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.823341  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:23.823362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:23.823453  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:23.851474  303437 cri.go:89] found id: ""
	I1210 07:09:23.851500  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.851510  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:23.851516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:23.851598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:23.876565  303437 cri.go:89] found id: ""
	I1210 07:09:23.876641  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.876665  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:23.876679  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:23.876753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:23.901598  303437 cri.go:89] found id: ""
	I1210 07:09:23.901624  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.901632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:23.901641  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:23.901698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:23.939880  303437 cri.go:89] found id: ""
	I1210 07:09:23.945774  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.945837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:23.945917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:23.946105  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:23.983936  303437 cri.go:89] found id: ""
	I1210 07:09:23.984019  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.984045  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:23.984096  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:23.984128  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:24.047417  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:24.047454  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:24.060782  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:24.060808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:24.123547  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:24.115234    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.115940    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.117664    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.118067    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.119622    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:24.123570  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:24.123583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:24.148767  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:24.148802  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
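	Taken together, the cycles repeat on a roughly three-second cadence: probe for the kube-apiserver process with pgrep, and when it is absent, re-list containers and re-gather logs. A minimal poll loop with the same shape (the six-minute deadline is an assumption for illustration, not minikube's actual timeout):

// waitloop.go: the shape of the retry loop driving the cycles above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's check:
// "sudo pgrep -xnf kube-apiserver.*minikube.*" exits 0 only on a match.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed deadline, for illustration only
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s spacing between cycles in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}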
	I1210 07:09:26.679138  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:26.691239  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:26.691311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:26.720725  303437 cri.go:89] found id: ""
	I1210 07:09:26.720748  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.720756  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:26.720763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:26.720824  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:26.745903  303437 cri.go:89] found id: ""
	I1210 07:09:26.745926  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.745935  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:26.745941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:26.745999  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:26.771250  303437 cri.go:89] found id: ""
	I1210 07:09:26.771279  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.771289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:26.771295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:26.771354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:26.795771  303437 cri.go:89] found id: ""
	I1210 07:09:26.795795  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.795804  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:26.795810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:26.795912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:26.820992  303437 cri.go:89] found id: ""
	I1210 07:09:26.821013  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.821023  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:26.821029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:26.821091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:26.849537  303437 cri.go:89] found id: ""
	I1210 07:09:26.849559  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.849568  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:26.849575  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:26.849631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:26.882245  303437 cri.go:89] found id: ""
	I1210 07:09:26.882274  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.882284  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:26.882290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:26.882354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:26.907397  303437 cri.go:89] found id: ""
	I1210 07:09:26.907421  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.907437  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:26.907446  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:26.907457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:26.945593  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:26.945619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:27.009478  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:27.009515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:27.023242  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:27.023268  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:27.088362  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:27.080378    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.081206    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.082792    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.083225    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.084935    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:27.088384  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:27.088396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
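The cycle above is the apiserver health wait visible throughout this transcript: minikube looks for a kube-apiserver process, probes crictl for each expected control-plane container, finds none, and falls back to gathering node logs. The same probe can be reproduced by hand; a minimal sketch, assuming shell access to the node (e.g. via minikube ssh) and crictl on the PATH, with the component names taken from the log above:

	# Probe for each control-plane container the way the log does; prints <none> when absent.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  echo "$c: ${ids:-<none>}"
	done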
	I1210 07:09:29.614457  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:29.624717  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:29.624839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:29.648905  303437 cri.go:89] found id: ""
	I1210 07:09:29.648929  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.648938  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:29.648944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:29.649031  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:29.693513  303437 cri.go:89] found id: ""
	I1210 07:09:29.693576  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.693597  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:29.693615  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:29.693703  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:29.718997  303437 cri.go:89] found id: ""
	I1210 07:09:29.719090  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.719114  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:29.719132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:29.719215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:29.749199  303437 cri.go:89] found id: ""
	I1210 07:09:29.749266  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.749289  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:29.749307  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:29.749402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:29.774719  303437 cri.go:89] found id: ""
	I1210 07:09:29.774795  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.774819  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:29.774841  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:29.774931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:29.799913  303437 cri.go:89] found id: ""
	I1210 07:09:29.799977  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.799999  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:29.800017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:29.800095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:29.823673  303437 cri.go:89] found id: ""
	I1210 07:09:29.823747  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.823769  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:29.823787  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:29.823859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:29.848157  303437 cri.go:89] found id: ""
	I1210 07:09:29.848188  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.848198  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:29.848208  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:29.848219  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:29.876009  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:29.876037  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:29.932276  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:29.932307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:29.949872  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:29.949898  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:30.045838  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:30.034181    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.034859    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037026    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037737    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.040198    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:30.045873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:30.045888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.576040  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:32.587217  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:32.587298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:32.613690  303437 cri.go:89] found id: ""
	I1210 07:09:32.613713  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.613722  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:32.613729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:32.613797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:32.639153  303437 cri.go:89] found id: ""
	I1210 07:09:32.639178  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.639187  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:32.639193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:32.639256  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:32.673727  303437 cri.go:89] found id: ""
	I1210 07:09:32.673799  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.673808  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:32.673815  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:32.673882  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:32.709195  303437 cri.go:89] found id: ""
	I1210 07:09:32.709222  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.709231  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:32.709238  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:32.709298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:32.737425  303437 cri.go:89] found id: ""
	I1210 07:09:32.737458  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.737467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:32.737474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:32.737532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:32.766042  303437 cri.go:89] found id: ""
	I1210 07:09:32.766069  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.766078  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:32.766086  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:32.766145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:32.791060  303437 cri.go:89] found id: ""
	I1210 07:09:32.791089  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.791098  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:32.791104  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:32.791164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:32.815424  303437 cri.go:89] found id: ""
	I1210 07:09:32.815445  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.815453  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:32.815462  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:32.815473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.845676  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:32.845718  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:32.877898  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:32.877927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:32.934870  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:32.934903  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:32.950436  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:32.950516  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:33.023900  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:33.014878    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.015644    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.017336    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.018088    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.019810    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
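Every "describe nodes" attempt in this section fails the same way: kubectl on the node targets https://localhost:8443, and with no kube-apiserver container running, the TCP dial is refused. A quick manual check for the same condition; a sketch under the same shell-access assumption (these commands are illustrative, not part of the test run):

	# No process should be listening on the apiserver port while the probe fails.
	sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
	# Reproduces the refusal kubectl reports (-k skips TLS verification).
	curl -ksS https://localhost:8443/healthz || true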
	I1210 07:09:35.524178  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:35.535098  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:35.535173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:35.563582  303437 cri.go:89] found id: ""
	I1210 07:09:35.563606  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.563614  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:35.563621  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:35.563682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:35.589346  303437 cri.go:89] found id: ""
	I1210 07:09:35.589368  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.589377  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:35.589384  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:35.589442  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:35.613807  303437 cri.go:89] found id: ""
	I1210 07:09:35.613833  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.613841  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:35.613848  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:35.613907  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:35.643139  303437 cri.go:89] found id: ""
	I1210 07:09:35.643162  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.643172  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:35.643178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:35.643240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:35.682597  303437 cri.go:89] found id: ""
	I1210 07:09:35.682629  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.682638  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:35.682645  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:35.682711  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:35.716718  303437 cri.go:89] found id: ""
	I1210 07:09:35.716739  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.716747  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:35.716753  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:35.716811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:35.746357  303437 cri.go:89] found id: ""
	I1210 07:09:35.746378  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.746387  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:35.746393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:35.746455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:35.773219  303437 cri.go:89] found id: ""
	I1210 07:09:35.773240  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.773251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:35.773260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:35.773273  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:35.838850  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:35.830993    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.831530    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833193    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833865    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.835553    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:35.838868  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:35.838882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:35.864265  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:35.864299  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:35.892689  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:35.892716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:35.952281  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:35.952311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.468021  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:38.478500  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:38.478574  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:38.505131  303437 cri.go:89] found id: ""
	I1210 07:09:38.505156  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.505174  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:38.505197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:38.505267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:38.529142  303437 cri.go:89] found id: ""
	I1210 07:09:38.529166  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.529175  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:38.529181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:38.529239  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:38.554410  303437 cri.go:89] found id: ""
	I1210 07:09:38.554434  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.554442  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:38.554449  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:38.554506  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:38.581372  303437 cri.go:89] found id: ""
	I1210 07:09:38.581395  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.581403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:38.581409  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:38.581472  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:38.606157  303437 cri.go:89] found id: ""
	I1210 07:09:38.606182  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.606191  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:38.606198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:38.606261  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:38.630691  303437 cri.go:89] found id: ""
	I1210 07:09:38.630717  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.630725  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:38.630731  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:38.630788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:38.655423  303437 cri.go:89] found id: ""
	I1210 07:09:38.655447  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.655456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:38.655463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:38.655524  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:38.685788  303437 cri.go:89] found id: ""
	I1210 07:09:38.685814  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.685822  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:38.685832  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:38.685844  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:38.750704  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:38.750740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.764389  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:38.764417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:38.825803  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:38.818667    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.819289    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.820727    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.821107    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.822534    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:38.825824  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:38.825836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:38.850907  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:38.850941  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:41.382590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:41.392996  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:41.393069  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:41.417044  303437 cri.go:89] found id: ""
	I1210 07:09:41.417069  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.417077  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:41.417083  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:41.417146  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:41.442003  303437 cri.go:89] found id: ""
	I1210 07:09:41.442077  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.442107  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:41.442127  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:41.442200  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:41.466958  303437 cri.go:89] found id: ""
	I1210 07:09:41.466985  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.466994  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:41.467000  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:41.467081  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:41.491996  303437 cri.go:89] found id: ""
	I1210 07:09:41.492018  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.492027  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:41.492033  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:41.492093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:41.517865  303437 cri.go:89] found id: ""
	I1210 07:09:41.517890  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.517908  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:41.517929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:41.518012  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:41.544162  303437 cri.go:89] found id: ""
	I1210 07:09:41.544184  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.544193  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:41.544199  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:41.544259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:41.573308  303437 cri.go:89] found id: ""
	I1210 07:09:41.573381  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.573404  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:41.573422  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:41.573502  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:41.602427  303437 cri.go:89] found id: ""
	I1210 07:09:41.602457  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.602467  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:41.602492  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:41.602511  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:41.658769  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:41.658803  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:41.681233  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:41.681259  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:41.747373  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:41.738699    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.739334    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.741375    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.742059    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.744132    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:41.747398  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:41.747411  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:41.772193  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:41.772224  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:44.302640  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:44.313058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:44.313127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:44.341886  303437 cri.go:89] found id: ""
	I1210 07:09:44.341914  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.341929  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:44.341935  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:44.341995  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:44.367439  303437 cri.go:89] found id: ""
	I1210 07:09:44.367460  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.367469  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:44.367475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:44.367532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:44.391640  303437 cri.go:89] found id: ""
	I1210 07:09:44.391668  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.391678  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:44.391685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:44.391780  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:44.421140  303437 cri.go:89] found id: ""
	I1210 07:09:44.421169  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.421178  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:44.421185  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:44.421263  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:44.444759  303437 cri.go:89] found id: ""
	I1210 07:09:44.444783  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.444792  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:44.444798  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:44.444858  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:44.468926  303437 cri.go:89] found id: ""
	I1210 07:09:44.468959  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.468968  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:44.468978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:44.469045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:44.495556  303437 cri.go:89] found id: ""
	I1210 07:09:44.495581  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.495590  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:44.495597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:44.495676  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:44.519631  303437 cri.go:89] found id: ""
	I1210 07:09:44.519654  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.519663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:44.519672  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:44.519684  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:44.532940  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:44.532964  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:44.598861  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:44.590948    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.591655    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593344    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593846    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.595521    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:44.598921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:44.598950  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:44.624141  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:44.624181  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:44.651186  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:44.651214  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
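Each cycle re-collects the same four log sources (kubelet, dmesg, containerd, container status). When triaging a stall like this one, grepping the kubelet and containerd journals directly for errors is usually quicker than reading the repeated collections; a sketch under the same shell-access assumption, reusing the unit names from the journalctl calls above:

	# Surface recent kubelet errors that may explain why the static pods never start.
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
	sudo journalctl -u containerd -n 400 --no-pager | tail -n 20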
	I1210 07:09:47.208206  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:47.218613  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:47.218695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:47.244616  303437 cri.go:89] found id: ""
	I1210 07:09:47.244643  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.244652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:47.244659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:47.244717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:47.270353  303437 cri.go:89] found id: ""
	I1210 07:09:47.270378  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.270387  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:47.270393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:47.270469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:47.296082  303437 cri.go:89] found id: ""
	I1210 07:09:47.296108  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.296117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:47.296123  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:47.296181  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:47.320296  303437 cri.go:89] found id: ""
	I1210 07:09:47.320362  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.320380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:47.320388  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:47.320459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:47.345546  303437 cri.go:89] found id: ""
	I1210 07:09:47.345571  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.345580  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:47.345587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:47.345647  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:47.375423  303437 cri.go:89] found id: ""
	I1210 07:09:47.375458  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.375467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:47.375475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:47.375536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:47.399857  303437 cri.go:89] found id: ""
	I1210 07:09:47.399880  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.399894  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:47.399901  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:47.399963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:47.431984  303437 cri.go:89] found id: ""
	I1210 07:09:47.432011  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.432019  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:47.432029  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:47.432060  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:47.458214  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:47.458248  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:47.490816  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:47.490843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:47.549328  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:47.549361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:47.562826  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:47.562855  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:47.624764  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:47.617028    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.617678    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619303    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619812    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.621440    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
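The liveness check that opens each cycle is a pgrep for a kube-apiserver process whose full command line matches kube-apiserver.*minikube.*; it can be run directly to confirm the process is still absent. A sketch, same assumptions as above, with the pattern copied from the log:

	# -x: match the whole command line, -n: newest match only, -f: match against full args.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not found"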
	I1210 07:09:50.125980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:50.136223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:50.136289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:50.169825  303437 cri.go:89] found id: ""
	I1210 07:09:50.169858  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.169867  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:50.169874  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:50.169966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:50.198977  303437 cri.go:89] found id: ""
	I1210 07:09:50.199000  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.199031  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:50.199039  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:50.199095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:50.235780  303437 cri.go:89] found id: ""
	I1210 07:09:50.235803  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.235811  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:50.235817  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:50.235875  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:50.259548  303437 cri.go:89] found id: ""
	I1210 07:09:50.259570  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.259578  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:50.259585  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:50.259641  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:50.285338  303437 cri.go:89] found id: ""
	I1210 07:09:50.285361  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.285369  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:50.285375  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:50.285432  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:50.310647  303437 cri.go:89] found id: ""
	I1210 07:09:50.310669  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.310678  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:50.310685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:50.310741  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:50.334419  303437 cri.go:89] found id: ""
	I1210 07:09:50.334448  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.334458  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:50.334464  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:50.334521  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:50.359803  303437 cri.go:89] found id: ""
	I1210 07:09:50.359827  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.359837  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:50.359847  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:50.359858  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:50.384958  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:50.384994  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:50.421068  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:50.421093  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:50.477375  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:50.477409  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:50.490923  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:50.490954  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:50.556587  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:50.548374    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.549044    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.550820    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.551415    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.553008    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:53.056876  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:53.067392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:53.067464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:53.092029  303437 cri.go:89] found id: ""
	I1210 07:09:53.092052  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.092062  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:53.092068  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:53.092125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:53.118131  303437 cri.go:89] found id: ""
	I1210 07:09:53.118156  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.118165  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:53.118172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:53.118232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:53.147375  303437 cri.go:89] found id: ""
	I1210 07:09:53.147398  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.147407  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:53.147413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:53.147471  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:53.184782  303437 cri.go:89] found id: ""
	I1210 07:09:53.184801  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.184810  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:53.184816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:53.184875  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:53.211867  303437 cri.go:89] found id: ""
	I1210 07:09:53.211892  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.211901  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:53.211908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:53.211965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:53.237656  303437 cri.go:89] found id: ""
	I1210 07:09:53.237678  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.237686  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:53.237693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:53.237761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:53.262840  303437 cri.go:89] found id: ""
	I1210 07:09:53.262861  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.262870  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:53.262876  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:53.262934  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:53.287214  303437 cri.go:89] found id: ""
	I1210 07:09:53.287235  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.287243  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:53.287252  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:53.287265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:53.316241  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:53.316267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:53.371646  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:53.371682  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:53.384755  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:53.384788  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:53.447921  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:53.440066    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.440752    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442394    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442882    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.444521    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:53.447948  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:53.447961  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
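Each polling cycle above sweeps the CRI runtime for every expected control-plane container by name and finds none. The per-component queries in the log are equivalent to a simple loop like this sketch (same crictl flags as in the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # prints container IDs, or nothing
    done

All eight queries returning empty output means containerd was never asked to create a single Kubernetes container, which usually points at kubelet (or its static-pod manifests) rather than at the runtime itself.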
	I1210 07:09:55.973173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:55.983576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:55.983656  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:56.011801  303437 cri.go:89] found id: ""
	I1210 07:09:56.011830  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.011840  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:56.011851  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:56.011968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:56.038072  303437 cri.go:89] found id: ""
	I1210 07:09:56.038104  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.038114  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:56.038120  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:56.038198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:56.068512  303437 cri.go:89] found id: ""
	I1210 07:09:56.068586  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.068610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:56.068629  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:56.068716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:56.094431  303437 cri.go:89] found id: ""
	I1210 07:09:56.094462  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.094471  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:56.094478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:56.094550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:56.120840  303437 cri.go:89] found id: ""
	I1210 07:09:56.120865  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.120875  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:56.120881  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:56.120957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:56.145302  303437 cri.go:89] found id: ""
	I1210 07:09:56.145335  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.145344  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:56.145350  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:56.145415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:56.177802  303437 cri.go:89] found id: ""
	I1210 07:09:56.177828  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.177837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:56.177843  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:56.177903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:56.217508  303437 cri.go:89] found id: ""
	I1210 07:09:56.217535  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.217544  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:56.217553  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:56.217565  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:56.236388  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:56.236414  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:56.299818  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:56.290345    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.291927    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.293053    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.294824    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.295281    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:56.299836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:56.299849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:56.324241  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:56.324274  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:56.351770  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:56.351798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
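One detail worth noting: the "container status" gather a few lines up uses a shell fallback so it works whether or not crictl is on the PATH. Written out, the one-liner from the log is roughly:

    CRICTL="$(which crictl || echo crictl)"    # fall back to the bare name if not on PATH
    sudo "$CRICTL" ps -a || sudo docker ps -a  # last resort: ask docker instead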
	I1210 07:09:58.907151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:58.920281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:58.920355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:58.951789  303437 cri.go:89] found id: ""
	I1210 07:09:58.951887  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.951924  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:58.951955  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:58.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:58.988101  303437 cri.go:89] found id: ""
	I1210 07:09:58.988174  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.988200  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:58.988214  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:58.988289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:59.015007  303437 cri.go:89] found id: ""
	I1210 07:09:59.015061  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.015070  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:59.015076  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:59.015145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:59.041267  303437 cri.go:89] found id: ""
	I1210 07:09:59.041290  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.041299  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:59.041305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:59.041364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:59.065295  303437 cri.go:89] found id: ""
	I1210 07:09:59.065317  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.065325  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:59.065332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:59.065389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:59.090688  303437 cri.go:89] found id: ""
	I1210 07:09:59.090710  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.090719  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:59.090735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:59.090796  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:59.123411  303437 cri.go:89] found id: ""
	I1210 07:09:59.123433  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.123442  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:59.123448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:59.123507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:59.148970  303437 cri.go:89] found id: ""
	I1210 07:09:59.148995  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.149003  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:59.149013  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:59.149024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:59.213078  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:59.213112  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:59.229582  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:59.229610  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:59.291341  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:59.283620    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.284364    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.285965    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.286418    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.288009    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:59.291371  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:59.291383  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:59.316302  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:59.316335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:01.843334  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:01.854638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:01.854715  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:01.880761  303437 cri.go:89] found id: ""
	I1210 07:10:01.880783  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.880792  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:01.880802  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:01.880863  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:01.910547  303437 cri.go:89] found id: ""
	I1210 07:10:01.910582  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.910591  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:01.910597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:01.910659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:01.946840  303437 cri.go:89] found id: ""
	I1210 07:10:01.946868  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.946878  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:01.946885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:01.946947  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:01.978924  303437 cri.go:89] found id: ""
	I1210 07:10:01.978961  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.978970  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:01.978976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:01.979080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:02.019488  303437 cri.go:89] found id: ""
	I1210 07:10:02.019517  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.019536  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:02.019543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:02.019630  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:02.046286  303437 cri.go:89] found id: ""
	I1210 07:10:02.046307  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.046319  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:02.046325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:02.046390  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:02.072527  303437 cri.go:89] found id: ""
	I1210 07:10:02.072552  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.072562  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:02.072568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:02.072631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:02.097399  303437 cri.go:89] found id: ""
	I1210 07:10:02.097421  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.097430  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:02.097440  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:02.097451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:02.158615  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:02.158651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:02.174600  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:02.174685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:02.250555  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:02.241608    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.242681    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.244544    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.245035    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.246871    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:02.250577  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:02.250590  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:02.276945  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:02.276982  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:04.815961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:04.826415  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:04.826482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:04.851192  303437 cri.go:89] found id: ""
	I1210 07:10:04.851217  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.851226  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:04.851233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:04.851295  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:04.880601  303437 cri.go:89] found id: ""
	I1210 07:10:04.880623  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.880632  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:04.880639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:04.880700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:04.910922  303437 cri.go:89] found id: ""
	I1210 07:10:04.910944  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.910954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:04.910960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:04.911053  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:04.945097  303437 cri.go:89] found id: ""
	I1210 07:10:04.945122  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.945131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:04.945137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:04.945198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:04.976739  303437 cri.go:89] found id: ""
	I1210 07:10:04.976759  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.976768  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:04.976774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:04.976828  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:05.004094  303437 cri.go:89] found id: ""
	I1210 07:10:05.004126  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.004136  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:05.004143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:05.004221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:05.031557  303437 cri.go:89] found id: ""
	I1210 07:10:05.031582  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.031591  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:05.031598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:05.031660  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:05.057223  303437 cri.go:89] found id: ""
	I1210 07:10:05.057245  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.057254  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:05.057264  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:05.057277  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:05.070835  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:05.070868  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:05.134682  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:05.126987    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.127646    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129140    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129598    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.131280    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:05.134701  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:05.134713  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:05.161896  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:05.161984  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:05.199637  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:05.199661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:07.763534  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:07.773915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:07.773983  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:07.800754  303437 cri.go:89] found id: ""
	I1210 07:10:07.800778  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.800788  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:07.800794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:07.800856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:07.826430  303437 cri.go:89] found id: ""
	I1210 07:10:07.826453  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.826462  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:07.826468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:07.826527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:07.850496  303437 cri.go:89] found id: ""
	I1210 07:10:07.850517  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.850528  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:07.850534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:07.850592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:07.875524  303437 cri.go:89] found id: ""
	I1210 07:10:07.875546  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.875555  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:07.875561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:07.875622  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:07.905072  303437 cri.go:89] found id: ""
	I1210 07:10:07.905094  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.905103  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:07.905109  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:07.905189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:07.936426  303437 cri.go:89] found id: ""
	I1210 07:10:07.936449  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.936457  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:07.936464  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:07.936527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:07.973539  303437 cri.go:89] found id: ""
	I1210 07:10:07.973618  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.973640  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:07.973659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:07.973772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:07.999823  303437 cri.go:89] found id: ""
	I1210 07:10:07.999914  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.999941  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:07.999964  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:08.000003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:08.068982  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:08.060765    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062197    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062643    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064190    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064500    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:08.069056  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:08.069079  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:08.094318  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:08.094351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:08.122292  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:08.122320  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:08.184455  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:08.184505  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:10.701562  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:10.711949  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:10.712015  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:10.737041  303437 cri.go:89] found id: ""
	I1210 07:10:10.737068  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.737078  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:10.737085  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:10.737152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:10.766737  303437 cri.go:89] found id: ""
	I1210 07:10:10.766759  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.766769  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:10.766775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:10.766833  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:10.795664  303437 cri.go:89] found id: ""
	I1210 07:10:10.795689  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.795698  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:10.795705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:10.795763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:10.819880  303437 cri.go:89] found id: ""
	I1210 07:10:10.819908  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.819917  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:10.819924  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:10.819986  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:10.843991  303437 cri.go:89] found id: ""
	I1210 07:10:10.844028  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.844037  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:10.844043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:10.844121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:10.868988  303437 cri.go:89] found id: ""
	I1210 07:10:10.869010  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.869019  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:10.869025  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:10.869088  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:10.893331  303437 cri.go:89] found id: ""
	I1210 07:10:10.893361  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.893371  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:10.893392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:10.893473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:10.925989  303437 cri.go:89] found id: ""
	I1210 07:10:10.926016  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.926025  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:10.926034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:10.926045  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:10.951381  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:10.951417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:10.992523  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:10.992547  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:11.048715  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:11.048751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
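With no containers found, the "Gathering logs" steps above fall back to host-level sources: journalctl for containerd and kubelet, dmesg for kernel warnings, and a shell fallback that tries crictl and then docker for container status. A simplified Go sketch of that collection pass is below (the gather helper is illustrative, not minikube code); the fallback command string is copied verbatim from the log.

	// Sketch of the host-level log collection run above, including the
	// crictl-or-docker fallback used for the "container status" step.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(label, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
	}

	func main() {
		gather("containerd", `sudo journalctl -u containerd -n 400`)
		gather("kubelet", `sudo journalctl -u kubelet -n 400`)
		gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		// `which crictl || echo crictl` keeps the pipeline valid even when
		// crictl is absent; a non-zero exit then falls through to docker.
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}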
	I1210 07:10:11.062864  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:11.062892  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:11.126862  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:11.117747    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.118147    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.119641    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.120260    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.122142    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
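The describe-nodes failure above is consistent with the empty crictl probes: kubectl dials the apiserver at localhost:8443 and gets ECONNREFUSED because no kube-apiserver container is running. A minimal sketch (not from the minikube codebase) that reproduces the failing check by dialing the same port:

	// Dial the apiserver port that kubectl reports as refused above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Expected while the control plane is down: "connect: connection
			// refused", matching the kubectl stderr in the log.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}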
	I1210 07:10:13.627173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
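Each cycle in this section opens with the pgrep line above: an exact full-command-line match (-xnf) for a kube-apiserver process belonging to the minikube profile. pgrep exits 0 on a match and 1 when nothing matches, and the timestamps show the whole probe-and-gather pass repeating roughly every three seconds. A simplified sketch of that retry loop, under the assumption that minikube polls until a deadline (the one-minute deadline here is illustrative):

	// Poll for a kube-apiserver process with pgrep until it appears or the
	// deadline passes, mirroring the ~3s cadence visible in the timestamps.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when a matching process exists, 1 when none does.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}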
	I1210 07:10:13.640121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:13.640189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:13.666074  303437 cri.go:89] found id: ""
	I1210 07:10:13.666097  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.666106  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:13.666112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:13.666172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:13.694979  303437 cri.go:89] found id: ""
	I1210 07:10:13.695001  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.695043  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:13.695051  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:13.695110  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:13.719004  303437 cri.go:89] found id: ""
	I1210 07:10:13.719045  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.719054  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:13.719066  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:13.719128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:13.743528  303437 cri.go:89] found id: ""
	I1210 07:10:13.743592  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.743614  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:13.743627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:13.743700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:13.773695  303437 cri.go:89] found id: ""
	I1210 07:10:13.773720  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.773737  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:13.773743  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:13.773802  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:13.797583  303437 cri.go:89] found id: ""
	I1210 07:10:13.797605  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.797614  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:13.797620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:13.797678  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:13.825318  303437 cri.go:89] found id: ""
	I1210 07:10:13.825348  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.825357  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:13.825363  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:13.825420  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:13.853561  303437 cri.go:89] found id: ""
	I1210 07:10:13.853585  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.853594  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:13.853604  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:13.853622  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:13.935926  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:13.915703    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.920870    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.921425    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923146    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923621    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:13.935954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:13.935967  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:13.962598  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:13.962630  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:13.990458  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:13.990484  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:14.047843  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:14.047880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.562478  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:16.576152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:16.576222  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:16.604031  303437 cri.go:89] found id: ""
	I1210 07:10:16.604054  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.604063  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:16.604069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:16.604128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:16.628609  303437 cri.go:89] found id: ""
	I1210 07:10:16.628631  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.628640  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:16.628658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:16.628717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:16.653619  303437 cri.go:89] found id: ""
	I1210 07:10:16.653656  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.653665  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:16.653671  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:16.653756  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:16.682568  303437 cri.go:89] found id: ""
	I1210 07:10:16.682604  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.682613  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:16.682620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:16.682693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:16.707801  303437 cri.go:89] found id: ""
	I1210 07:10:16.707835  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.707845  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:16.707852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:16.707935  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:16.732620  303437 cri.go:89] found id: ""
	I1210 07:10:16.732688  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.732711  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:16.732728  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:16.732825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:16.758445  303437 cri.go:89] found id: ""
	I1210 07:10:16.758467  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.758475  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:16.758482  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:16.758539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:16.783975  303437 cri.go:89] found id: ""
	I1210 07:10:16.784001  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.784010  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:16.784019  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:16.784047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:16.814022  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:16.814049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:16.869237  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:16.869269  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.882654  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:16.882731  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:16.969042  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:16.957319    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.958047    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.960851    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.961625    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.964373    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:16.969064  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:16.969086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:19.496234  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:19.506951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:19.507093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:19.530611  303437 cri.go:89] found id: ""
	I1210 07:10:19.530643  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.530652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:19.530658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:19.530727  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:19.557799  303437 cri.go:89] found id: ""
	I1210 07:10:19.557835  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.557845  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:19.557852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:19.557920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:19.582933  303437 cri.go:89] found id: ""
	I1210 07:10:19.582967  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.582976  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:19.582983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:19.583072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:19.607826  303437 cri.go:89] found id: ""
	I1210 07:10:19.607889  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.607909  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:19.607917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:19.607979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:19.632512  303437 cri.go:89] found id: ""
	I1210 07:10:19.632580  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.632597  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:19.632604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:19.632665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:19.657636  303437 cri.go:89] found id: ""
	I1210 07:10:19.657668  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.657677  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:19.657684  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:19.657765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:19.682353  303437 cri.go:89] found id: ""
	I1210 07:10:19.682423  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.682456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:19.682476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:19.682562  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:19.706488  303437 cri.go:89] found id: ""
	I1210 07:10:19.706549  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.706582  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:19.706606  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:19.706644  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:19.719694  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:19.719721  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:19.784893  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:19.777331    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.777928    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.779521    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.780075    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.781604    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:19.784915  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:19.784928  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:19.809606  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:19.809641  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:19.841622  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:19.841657  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:22.397071  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:22.407225  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:22.407298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:22.443280  303437 cri.go:89] found id: ""
	I1210 07:10:22.443304  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.443313  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:22.443320  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:22.443377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:22.476100  303437 cri.go:89] found id: ""
	I1210 07:10:22.476121  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.476130  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:22.476136  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:22.476197  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:22.504294  303437 cri.go:89] found id: ""
	I1210 07:10:22.504317  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.504326  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:22.504332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:22.504388  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:22.527983  303437 cri.go:89] found id: ""
	I1210 07:10:22.528006  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.528015  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:22.528028  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:22.528085  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:22.552219  303437 cri.go:89] found id: ""
	I1210 07:10:22.552243  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.552252  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:22.552257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:22.552314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:22.576437  303437 cri.go:89] found id: ""
	I1210 07:10:22.576459  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.576469  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:22.576475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:22.576530  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:22.601577  303437 cri.go:89] found id: ""
	I1210 07:10:22.601599  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.601608  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:22.601614  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:22.601671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:22.625855  303437 cri.go:89] found id: ""
	I1210 07:10:22.625878  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.625889  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:22.625899  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:22.625910  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:22.681686  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:22.681732  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:22.695126  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:22.695154  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:22.758688  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:22.751337    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.752263    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.753775    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.754238    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.755632    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:22.758709  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:22.758722  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:22.783636  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:22.783671  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:25.311139  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:25.321885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:25.321968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:25.346177  303437 cri.go:89] found id: ""
	I1210 07:10:25.346257  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.346280  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:25.346299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:25.346402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:25.371678  303437 cri.go:89] found id: ""
	I1210 07:10:25.371751  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.371766  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:25.371773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:25.371836  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:25.404393  303437 cri.go:89] found id: ""
	I1210 07:10:25.404419  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.404436  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:25.404450  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:25.404528  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:25.439726  303437 cri.go:89] found id: ""
	I1210 07:10:25.439766  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.439779  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:25.439803  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:25.439965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:25.476965  303437 cri.go:89] found id: ""
	I1210 07:10:25.476998  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.477007  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:25.477018  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:25.477127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:25.502342  303437 cri.go:89] found id: ""
	I1210 07:10:25.502369  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.502378  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:25.502385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:25.502451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:25.528396  303437 cri.go:89] found id: ""
	I1210 07:10:25.528423  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.528432  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:25.528439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:25.528543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:25.555005  303437 cri.go:89] found id: ""
	I1210 07:10:25.555065  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.555074  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:25.555083  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:25.555095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:25.568421  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:25.568450  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:25.629120  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:25.622083    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.622649    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624105    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624522    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.626001    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:25.629143  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:25.629155  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:25.654736  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:25.654768  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:25.685404  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:25.685473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:28.247164  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:28.257638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:28.257709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:28.283706  303437 cri.go:89] found id: ""
	I1210 07:10:28.283729  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.283738  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:28.283744  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:28.283806  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:28.311304  303437 cri.go:89] found id: ""
	I1210 07:10:28.311327  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.311336  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:28.311342  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:28.311407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:28.336026  303437 cri.go:89] found id: ""
	I1210 07:10:28.336048  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.336056  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:28.336062  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:28.336121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:28.361333  303437 cri.go:89] found id: ""
	I1210 07:10:28.361354  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.361362  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:28.361369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:28.361428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:28.389101  303437 cri.go:89] found id: ""
	I1210 07:10:28.389123  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.389132  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:28.389138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:28.389196  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:28.422619  303437 cri.go:89] found id: ""
	I1210 07:10:28.422641  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.422649  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:28.422656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:28.422713  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:28.453144  303437 cri.go:89] found id: ""
	I1210 07:10:28.453217  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.453240  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:28.453260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:28.453347  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:28.483124  303437 cri.go:89] found id: ""
	I1210 07:10:28.483148  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.483158  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:28.483167  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:28.483178  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:28.496766  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:28.496793  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:28.563971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:28.556200    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.557736    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.558089    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559546    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559812    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:28.564003  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:28.564015  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:28.588981  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:28.589012  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:28.617971  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:28.618000  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.175214  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:31.187495  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:31.187568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:31.221446  303437 cri.go:89] found id: ""
	I1210 07:10:31.221473  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.221482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:31.221488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:31.221548  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:31.246343  303437 cri.go:89] found id: ""
	I1210 07:10:31.246377  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.246386  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:31.246392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:31.246459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:31.270266  303437 cri.go:89] found id: ""
	I1210 07:10:31.270289  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.270303  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:31.270309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:31.270365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:31.295166  303437 cri.go:89] found id: ""
	I1210 07:10:31.295190  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.295199  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:31.295219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:31.295284  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:31.320783  303437 cri.go:89] found id: ""
	I1210 07:10:31.320822  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.320831  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:31.320838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:31.320902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:31.344885  303437 cri.go:89] found id: ""
	I1210 07:10:31.344910  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.344919  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:31.344927  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:31.344984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:31.369604  303437 cri.go:89] found id: ""
	I1210 07:10:31.369627  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.369636  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:31.369642  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:31.369700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:31.396633  303437 cri.go:89] found id: ""
	I1210 07:10:31.396654  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.396663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:31.396672  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:31.396685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.458644  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:31.458678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:31.474603  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:31.474632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:31.540901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:31.533490    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.534340    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.535925    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.536236    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.537709    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:31.540921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:31.540933  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:31.565730  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:31.565763  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:34.098229  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:34.108967  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:34.109037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:34.137131  303437 cri.go:89] found id: ""
	I1210 07:10:34.137153  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.137162  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:34.137168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:34.137224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:34.171468  303437 cri.go:89] found id: ""
	I1210 07:10:34.171489  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.171498  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:34.171504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:34.171565  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:34.199509  303437 cri.go:89] found id: ""
	I1210 07:10:34.199531  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.199539  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:34.199545  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:34.199603  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:34.230270  303437 cri.go:89] found id: ""
	I1210 07:10:34.230292  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.230301  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:34.230308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:34.230368  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:34.257508  303437 cri.go:89] found id: ""
	I1210 07:10:34.257529  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.257538  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:34.257544  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:34.257598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:34.285487  303437 cri.go:89] found id: ""
	I1210 07:10:34.285509  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.285517  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:34.285524  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:34.285584  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:34.312438  303437 cri.go:89] found id: ""
	I1210 07:10:34.312460  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.312469  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:34.312475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:34.312535  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:34.336063  303437 cri.go:89] found id: ""
	I1210 07:10:34.336137  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.336152  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:34.336161  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:34.336172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:34.392136  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:34.392168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:34.405661  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:34.405691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:34.486073  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:34.478058    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.478642    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480446    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480885    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.482506    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:34.486096  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:34.486110  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:34.512711  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:34.512745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
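The component sweep repeated in each pass (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) is plain crictl filtered by name, and every probe here returns an empty ID list. The same sweep as a single loop, runnable on the node (assumes crictl is on PATH, which is what the log's `which crictl || echo crictl` fallback also expects):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"   # every pass above would print <none>
    done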
	I1210 07:10:37.043733  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:37.054272  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:37.054343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:37.080616  303437 cri.go:89] found id: ""
	I1210 07:10:37.080640  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.080649  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:37.080656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:37.080716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:37.104975  303437 cri.go:89] found id: ""
	I1210 07:10:37.105002  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.105010  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:37.105017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:37.105077  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:37.128929  303437 cri.go:89] found id: ""
	I1210 07:10:37.128952  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.128960  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:37.128966  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:37.129026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:37.154538  303437 cri.go:89] found id: ""
	I1210 07:10:37.154561  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.154570  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:37.154577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:37.154637  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:37.183900  303437 cri.go:89] found id: ""
	I1210 07:10:37.183920  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.183928  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:37.183934  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:37.183994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:37.218659  303437 cri.go:89] found id: ""
	I1210 07:10:37.218681  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.218689  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:37.218696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:37.218758  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:37.243786  303437 cri.go:89] found id: ""
	I1210 07:10:37.243808  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.243817  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:37.243824  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:37.243889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:37.271822  303437 cri.go:89] found id: ""
	I1210 07:10:37.271847  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.271856  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:37.271865  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:37.271877  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:37.327230  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:37.327261  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:37.340728  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:37.340755  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:37.402472  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:37.395392    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.395764    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397322    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397745    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.399404    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:37.402534  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:37.402560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:37.428514  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:37.428587  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:39.957676  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:39.968353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:39.968422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:39.996461  303437 cri.go:89] found id: ""
	I1210 07:10:39.996487  303437 logs.go:282] 0 containers: []
	W1210 07:10:39.996497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:39.996504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:39.996572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:40.052529  303437 cri.go:89] found id: ""
	I1210 07:10:40.052553  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.052563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:40.052570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:40.052635  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:40.083247  303437 cri.go:89] found id: ""
	I1210 07:10:40.083272  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.083282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:40.083288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:40.083349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:40.109171  303437 cri.go:89] found id: ""
	I1210 07:10:40.109195  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.109204  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:40.109211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:40.109271  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:40.138871  303437 cri.go:89] found id: ""
	I1210 07:10:40.138950  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.138972  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:40.138992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:40.139100  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:40.176299  303437 cri.go:89] found id: ""
	I1210 07:10:40.176335  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.176345  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:40.176352  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:40.176448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:40.213557  303437 cri.go:89] found id: ""
	I1210 07:10:40.213590  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.213600  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:40.213622  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:40.213706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:40.253605  303437 cri.go:89] found id: ""
	I1210 07:10:40.253639  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.253648  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:40.253658  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:40.253670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:40.289048  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:40.289076  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:40.348311  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:40.348344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:40.364207  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:40.364249  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:40.431287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:40.422606    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.423275    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.424961    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.425595    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.427272    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:40.431309  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:40.431325  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:42.962817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:42.973583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:42.973714  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:43.004181  303437 cri.go:89] found id: ""
	I1210 07:10:43.004211  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.004222  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:43.004235  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:43.004302  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:43.031231  303437 cri.go:89] found id: ""
	I1210 07:10:43.031252  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.031261  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:43.031267  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:43.031324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:43.056959  303437 cri.go:89] found id: ""
	I1210 07:10:43.056991  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.057002  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:43.057009  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:43.057072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:43.086361  303437 cri.go:89] found id: ""
	I1210 07:10:43.086393  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.086403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:43.086413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:43.086481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:43.112977  303437 cri.go:89] found id: ""
	I1210 07:10:43.113003  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.113013  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:43.113020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:43.113079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:43.137716  303437 cri.go:89] found id: ""
	I1210 07:10:43.137740  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.137749  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:43.137755  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:43.137814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:43.173396  303437 cri.go:89] found id: ""
	I1210 07:10:43.173421  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.173431  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:43.173437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:43.173494  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:43.202828  303437 cri.go:89] found id: ""
	I1210 07:10:43.202852  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.202861  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:43.202871  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:43.202885  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:43.265997  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:43.266036  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:43.281547  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:43.281582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:43.359532  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:43.352125   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.352633   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354207   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354531   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.356009   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:43.359554  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:43.359567  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:43.392377  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:43.392433  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
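Of the gathering steps, only `describe nodes` needs a live apiserver, which is why it is the only one that fails loudly. The exact command from the log can be replayed on the node; while the apiserver is down it exits 1 with the same connection-refused errors, which at least confirms the kubectl binary and kubeconfig paths are intact:

    # Verbatim from the log; expected to fail with status 1 until :8443 answers.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig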
	I1210 07:10:45.942739  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:45.955296  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:45.955374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:45.984462  303437 cri.go:89] found id: ""
	I1210 07:10:45.984488  303437 logs.go:282] 0 containers: []
	W1210 07:10:45.984497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:45.984507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:45.984566  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:46.014873  303437 cri.go:89] found id: ""
	I1210 07:10:46.014898  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.014920  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:46.014928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:46.015038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:46.044539  303437 cri.go:89] found id: ""
	I1210 07:10:46.044565  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.044574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:46.044581  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:46.044642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:46.070950  303437 cri.go:89] found id: ""
	I1210 07:10:46.070975  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.070985  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:46.070992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:46.071091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:46.101134  303437 cri.go:89] found id: ""
	I1210 07:10:46.101160  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.101170  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:46.101176  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:46.101255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:46.126003  303437 cri.go:89] found id: ""
	I1210 07:10:46.126028  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.126037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:46.126044  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:46.126103  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:46.152209  303437 cri.go:89] found id: ""
	I1210 07:10:46.152231  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.152239  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:46.152245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:46.152303  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:46.183764  303437 cri.go:89] found id: ""
	I1210 07:10:46.183786  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.183794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:46.183803  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:46.183813  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:46.248135  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:46.248173  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:46.262749  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:46.262778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:46.330280  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:46.322629   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.323199   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.324997   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.325371   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.326892   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:46.330302  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:46.330315  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:46.356151  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:46.356184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:48.884130  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:48.894898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:48.894989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:48.919239  303437 cri.go:89] found id: ""
	I1210 07:10:48.919266  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.919275  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:48.919282  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:48.919343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:48.946463  303437 cri.go:89] found id: ""
	I1210 07:10:48.946487  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.946497  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:48.946509  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:48.946569  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:48.971661  303437 cri.go:89] found id: ""
	I1210 07:10:48.971735  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.971757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:48.971772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:48.971857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:48.996435  303437 cri.go:89] found id: ""
	I1210 07:10:48.996457  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.996466  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:48.996472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:48.996539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:49.023269  303437 cri.go:89] found id: ""
	I1210 07:10:49.023296  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.023305  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:49.023311  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:49.023371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:49.052018  303437 cri.go:89] found id: ""
	I1210 07:10:49.052042  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.052051  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:49.052058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:49.052125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:49.076866  303437 cri.go:89] found id: ""
	I1210 07:10:49.076929  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.076943  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:49.076951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:49.077009  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:49.105029  303437 cri.go:89] found id: ""
	I1210 07:10:49.105051  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.105061  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:49.105070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:49.105081  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:49.161025  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:49.161103  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:49.176997  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:49.177065  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:49.246287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:49.238608   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.239236   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.240909   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.241468   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.243158   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:49.246359  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:49.246386  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:49.271827  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:49.271865  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
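Note the cadence of the poll: the `pgrep -xnf kube-apiserver.*minikube.*` probe repeats roughly every three seconds (07:10:31, :34, :37, :40, :43, :46, :49, ...). This is minikube's apiserver wait loop; each iteration re-lists containers and re-gathers the same logs, so the transcript keeps growing even though the node's state never changes.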
	I1210 07:10:51.801611  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:51.812172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:51.812240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:51.836841  303437 cri.go:89] found id: ""
	I1210 07:10:51.836864  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.836874  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:51.836880  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:51.836942  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:51.860730  303437 cri.go:89] found id: ""
	I1210 07:10:51.860754  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.860764  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:51.860770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:51.860831  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:51.885358  303437 cri.go:89] found id: ""
	I1210 07:10:51.885379  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.885388  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:51.885394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:51.885452  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:51.909974  303437 cri.go:89] found id: ""
	I1210 07:10:51.910038  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.910062  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:51.910080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:51.910152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:51.938488  303437 cri.go:89] found id: ""
	I1210 07:10:51.938553  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.938577  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:51.938596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:51.938669  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:51.964789  303437 cri.go:89] found id: ""
	I1210 07:10:51.964821  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.964831  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:51.964837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:51.964914  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:51.988457  303437 cri.go:89] found id: ""
	I1210 07:10:51.988478  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.988487  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:51.988493  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:51.988553  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:52.032140  303437 cri.go:89] found id: ""
	I1210 07:10:52.032164  303437 logs.go:282] 0 containers: []
	W1210 07:10:52.032177  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:52.032187  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:52.032198  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:52.058273  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:52.058311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:52.089897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:52.089924  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:52.145350  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:52.145387  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:52.162441  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:52.162475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:52.244944  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:52.233560   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.234752   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.239562   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.240120   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.241681   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:54.746617  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:54.757597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:54.757677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:54.785180  303437 cri.go:89] found id: ""
	I1210 07:10:54.785205  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.785215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:54.785222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:54.785283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:54.813159  303437 cri.go:89] found id: ""
	I1210 07:10:54.813184  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.813193  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:54.813200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:54.813258  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:54.840481  303437 cri.go:89] found id: ""
	I1210 07:10:54.840503  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.840512  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:54.840519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:54.840578  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:54.869478  303437 cri.go:89] found id: ""
	I1210 07:10:54.869500  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.869509  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:54.869516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:54.869573  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:54.892998  303437 cri.go:89] found id: ""
	I1210 07:10:54.893020  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.893028  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:54.893034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:54.893093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:54.921729  303437 cri.go:89] found id: ""
	I1210 07:10:54.921755  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.921765  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:54.921772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:54.921838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:54.946951  303437 cri.go:89] found id: ""
	I1210 07:10:54.946976  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.946985  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:54.946992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:54.947069  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:54.972444  303437 cri.go:89] found id: ""
	I1210 07:10:54.972466  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.972475  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:54.972484  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:54.972502  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:54.997696  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:54.997743  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:55.038495  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:55.038532  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:55.099784  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:55.099825  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:55.115531  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:55.115561  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:55.193319  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:55.183569   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.187128   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188202   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188540   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.190127   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:55.183569   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.187128   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188202   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188540   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.190127   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
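Each polling cycle above follows the same pattern: probe every expected control-plane component via crictl, and warn when no container matches. A minimal shell sketch of that probe loop, reconstructed from the commands visible in this log (minikube's actual implementation is Go; this is an approximation for reading the log, not its source):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # crictl ps -a lists all containers (running or not); --quiet prints
      # only IDs, and --name filters by regex on the container name.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

An empty result for every component, as seen throughout this section, means the kubelet never started any control-plane containers at all.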
	I1210 07:10:57.693558  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:57.704587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:57.704698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:57.733113  303437 cri.go:89] found id: ""
	I1210 07:10:57.733137  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.733147  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:57.733154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:57.733217  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:57.759697  303437 cri.go:89] found id: ""
	I1210 07:10:57.759721  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.759730  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:57.759736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:57.759813  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:57.785244  303437 cri.go:89] found id: ""
	I1210 07:10:57.785273  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.785282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:57.785288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:57.785349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:57.819299  303437 cri.go:89] found id: ""
	I1210 07:10:57.819324  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.819333  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:57.819339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:57.819397  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:57.843698  303437 cri.go:89] found id: ""
	I1210 07:10:57.843720  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.843729  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:57.843736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:57.843797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:57.867903  303437 cri.go:89] found id: ""
	I1210 07:10:57.867928  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.867938  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:57.867944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:57.868003  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:57.892038  303437 cri.go:89] found id: ""
	I1210 07:10:57.892065  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.892074  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:57.892080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:57.892144  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:57.917032  303437 cri.go:89] found id: ""
	I1210 07:10:57.917055  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.917064  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:57.917073  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:57.917084  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:57.972772  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:57.972808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:57.986446  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:57.986475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:58.053540  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:58.045445   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.046514   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.047201   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.048794   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.049108   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:58.045445   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.046514   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.047201   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.048794   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.049108   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:58.053559  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:58.053572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:58.078999  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:58.079080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
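The "container status" gathering step is a defensive fallback chain: the command substitution resolves crictl if it is on PATH, `echo crictl` keeps the command well-formed if it is not, and the trailing `|| sudo docker ps -a` covers Docker-runtime nodes. Standalone equivalent (the log uses backticks; $() is the same substitution):

    # List all containers via crictl when available, otherwise via docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a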
	I1210 07:11:00.609346  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:00.620922  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:00.620998  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:00.647744  303437 cri.go:89] found id: ""
	I1210 07:11:00.647766  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.647775  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:00.647781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:00.647838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:00.685141  303437 cri.go:89] found id: ""
	I1210 07:11:00.685162  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.685171  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:00.685177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:00.685237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:00.713949  303437 cri.go:89] found id: ""
	I1210 07:11:00.713971  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.713980  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:00.713986  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:00.714045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:00.740428  303437 cri.go:89] found id: ""
	I1210 07:11:00.740453  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.740463  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:00.740471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:00.740531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:00.765430  303437 cri.go:89] found id: ""
	I1210 07:11:00.765455  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.765464  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:00.765471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:00.765529  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:00.790771  303437 cri.go:89] found id: ""
	I1210 07:11:00.790797  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.790806  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:00.790813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:00.790871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:00.817430  303437 cri.go:89] found id: ""
	I1210 07:11:00.817456  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.817465  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:00.817471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:00.817531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:00.841761  303437 cri.go:89] found id: ""
	I1210 07:11:00.841785  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.841794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:00.841803  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:00.841817  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:00.855324  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:00.855351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:00.926358  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:00.918056   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.918855   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.920644   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.921186   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.922891   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:00.918056   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.918855   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.920644   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.921186   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.922891   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:00.926380  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:00.926394  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:00.951644  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:00.951678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:00.979845  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:00.979875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
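The timestamps show the whole probe-and-gather cycle repeating roughly every three seconds, gated on the pgrep check that opens each cycle: -x requires an exact match, -n picks the newest matching process, and -f matches the pattern against the full command line, so it succeeds only once a kube-apiserver process mentioning "minikube" is actually running.

    # Apiserver liveness probe from the log; exits non-zero while none runs.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'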
	I1210 07:11:03.540927  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:03.551392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:03.551462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:03.576792  303437 cri.go:89] found id: ""
	I1210 07:11:03.576821  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.576830  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:03.576837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:03.576896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:03.601193  303437 cri.go:89] found id: ""
	I1210 07:11:03.601216  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.601225  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:03.601233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:03.601290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:03.626528  303437 cri.go:89] found id: ""
	I1210 07:11:03.626550  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.626559  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:03.626565  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:03.626624  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:03.656106  303437 cri.go:89] found id: ""
	I1210 07:11:03.656128  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.656137  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:03.656149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:03.656206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:03.691936  303437 cri.go:89] found id: ""
	I1210 07:11:03.691960  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.691970  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:03.691976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:03.692037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:03.721295  303437 cri.go:89] found id: ""
	I1210 07:11:03.721321  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.721331  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:03.721338  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:03.721409  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:03.750080  303437 cri.go:89] found id: ""
	I1210 07:11:03.750105  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.750114  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:03.750121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:03.750205  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:03.777748  303437 cri.go:89] found id: ""
	I1210 07:11:03.777771  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.777780  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:03.777815  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:03.777836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:03.792128  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:03.792159  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:03.859337  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:03.851981   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.852547   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854609   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.856112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:03.851981   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.852547   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854609   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.856112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:03.859358  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:03.859371  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:03.885445  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:03.885482  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:03.915897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:03.915925  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:06.473632  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:06.484351  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:06.484431  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:06.509957  303437 cri.go:89] found id: ""
	I1210 07:11:06.509982  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.509991  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:06.509997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:06.510061  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:06.537150  303437 cri.go:89] found id: ""
	I1210 07:11:06.537175  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.537185  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:06.537195  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:06.537255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:06.571765  303437 cri.go:89] found id: ""
	I1210 07:11:06.571789  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.571798  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:06.571804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:06.571872  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:06.600905  303437 cri.go:89] found id: ""
	I1210 07:11:06.600928  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.600938  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:06.600944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:06.601007  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:06.625296  303437 cri.go:89] found id: ""
	I1210 07:11:06.625320  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.625329  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:06.625335  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:06.625396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:06.653467  303437 cri.go:89] found id: ""
	I1210 07:11:06.653490  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.653499  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:06.653505  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:06.653563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:06.693284  303437 cri.go:89] found id: ""
	I1210 07:11:06.693309  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.693319  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:06.693325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:06.693385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:06.731038  303437 cri.go:89] found id: ""
	I1210 07:11:06.731061  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.731069  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:06.731079  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:06.731091  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:06.744632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:06.744661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:06.805649  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:06.797284   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.798135   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.799790   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.800106   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.801600   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:06.797284   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.798135   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.799790   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.800106   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.801600   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:06.805675  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:06.805697  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:06.830881  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:06.830917  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:06.859403  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:06.859429  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
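The dmesg step in each cycle keeps kernel noise bounded: -P disables the pager, -H selects human-readable output, -L=never disables color, --level restricts output to warning severity and above, and the tail caps the result at 400 lines.

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400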
	I1210 07:11:09.415956  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:09.428117  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:09.428237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:09.457364  303437 cri.go:89] found id: ""
	I1210 07:11:09.457426  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.457457  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:09.457478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:09.457570  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:09.487281  303437 cri.go:89] found id: ""
	I1210 07:11:09.487343  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.487375  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:09.487395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:09.487481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:09.512841  303437 cri.go:89] found id: ""
	I1210 07:11:09.512912  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.512945  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:09.512964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:09.513056  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:09.538740  303437 cri.go:89] found id: ""
	I1210 07:11:09.538824  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.538855  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:09.538885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:09.538979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:09.566651  303437 cri.go:89] found id: ""
	I1210 07:11:09.566692  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.566718  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:09.566732  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:09.566811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:09.591707  303437 cri.go:89] found id: ""
	I1210 07:11:09.591782  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.591798  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:09.591808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:09.591866  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:09.620542  303437 cri.go:89] found id: ""
	I1210 07:11:09.620568  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.620577  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:09.620584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:09.620642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:09.649059  303437 cri.go:89] found id: ""
	I1210 07:11:09.649082  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.649091  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:09.649100  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:09.649111  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:09.674480  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:09.674512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:09.715383  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:09.715410  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:09.775480  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:09.775512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:09.788719  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:09.788798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:09.855981  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:09.848870   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.849294   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.850550   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.851138   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.852720   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:09.848870   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.849294   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.850550   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.851138   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.852720   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
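Every "describe nodes" attempt fails identically: with no kube-apiserver container, nothing listens on port 8443, so kubectl's dial to [::1]:8443 is refused before any API request is made. The failing command is runnable on the node as-is:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig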
	I1210 07:11:12.356259  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:12.366697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:12.366763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:12.390732  303437 cri.go:89] found id: ""
	I1210 07:11:12.390756  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.390764  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:12.390771  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:12.390826  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:12.430569  303437 cri.go:89] found id: ""
	I1210 07:11:12.430619  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.430631  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:12.430638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:12.430704  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:12.477376  303437 cri.go:89] found id: ""
	I1210 07:11:12.477398  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.477406  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:12.477412  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:12.477483  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:12.503110  303437 cri.go:89] found id: ""
	I1210 07:11:12.503132  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.503140  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:12.503147  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:12.503206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:12.527661  303437 cri.go:89] found id: ""
	I1210 07:11:12.527683  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.527691  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:12.527698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:12.527757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:12.552603  303437 cri.go:89] found id: ""
	I1210 07:11:12.552624  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.552632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:12.552639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:12.552701  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:12.576969  303437 cri.go:89] found id: ""
	I1210 07:11:12.576991  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.576999  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:12.577005  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:12.577074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:12.602537  303437 cri.go:89] found id: ""
	I1210 07:11:12.602559  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.602568  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:12.602577  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:12.602589  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:12.660382  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:12.660462  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:12.675575  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:12.675600  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:12.748937  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:12.741330   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.741988   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.743656   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.744158   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.745748   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:12.741330   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.741988   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.743656   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.744158   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.745748   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:12.748957  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:12.748970  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:12.773717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:12.773752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:15.305384  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:15.315713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:15.315783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:15.340655  303437 cri.go:89] found id: ""
	I1210 07:11:15.340678  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.340687  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:15.340693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:15.340757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:15.366091  303437 cri.go:89] found id: ""
	I1210 07:11:15.366115  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.366123  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:15.366130  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:15.366187  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:15.392837  303437 cri.go:89] found id: ""
	I1210 07:11:15.392862  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.392871  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:15.392877  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:15.392939  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:15.435313  303437 cri.go:89] found id: ""
	I1210 07:11:15.435340  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.435349  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:15.435356  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:15.435422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:15.466475  303437 cri.go:89] found id: ""
	I1210 07:11:15.466500  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.466509  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:15.466516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:15.466575  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:15.497149  303437 cri.go:89] found id: ""
	I1210 07:11:15.497175  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.497184  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:15.497191  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:15.497250  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:15.523660  303437 cri.go:89] found id: ""
	I1210 07:11:15.523725  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.523741  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:15.523748  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:15.523808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:15.547943  303437 cri.go:89] found id: ""
	I1210 07:11:15.547971  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.547987  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:15.547996  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:15.548007  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:15.603029  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:15.603064  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:15.616115  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:15.616150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:15.696616  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:15.686858   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.687579   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689227   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689725   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.693083   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:15.686858   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.687579   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689227   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689725   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.693083   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:15.696637  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:15.696660  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:15.728162  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:15.728212  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
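Taken together, the empty crictl probes plus the refused connections point at the kubelet failing to launch the control-plane static pods; the kubelet journal gathered above is where the root cause would surface. On a kubeadm-provisioned minikube node the static pod manifests are expected under the standard path (assumption: default kubeadm layout, not confirmed by this log):

    # Static pod manifests the kubelet should be running (assumed default path).
    ls /etc/kubernetes/manifests/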
	I1210 07:11:18.262884  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:18.273396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:18.273467  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:18.298776  303437 cri.go:89] found id: ""
	I1210 07:11:18.298799  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.298809  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:18.298816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:18.298873  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:18.326358  303437 cri.go:89] found id: ""
	I1210 07:11:18.326431  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.326444  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:18.326472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:18.326567  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:18.351094  303437 cri.go:89] found id: ""
	I1210 07:11:18.351116  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.351125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:18.351132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:18.351190  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:18.376189  303437 cri.go:89] found id: ""
	I1210 07:11:18.376211  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.376220  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:18.376227  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:18.376283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:18.400127  303437 cri.go:89] found id: ""
	I1210 07:11:18.400151  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.400160  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:18.400166  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:18.400231  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:18.429089  303437 cri.go:89] found id: ""
	I1210 07:11:18.429160  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.429173  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:18.429181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:18.429304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:18.462081  303437 cri.go:89] found id: ""
	I1210 07:11:18.462162  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.462174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:18.462202  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:18.462289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:18.490007  303437 cri.go:89] found id: ""
	I1210 07:11:18.490081  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.490105  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:18.490128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:18.490164  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:18.506325  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:18.506400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:18.582081  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:18.572894   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.573949   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.574774   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.576605   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.577188   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:18.572894   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.573949   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.574774   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.576605   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.577188   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:18.582154  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:18.582194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:18.608014  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:18.608047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:18.637797  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:18.637826  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
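Every kubectl attempt in this window fails with "connect: connection refused" on [::1]:8443 because no kube-apiserver container exists yet, so the runner keeps cycling through the probes above. The same condition can be confirmed by hand from inside the node; this is a minimal sketch, not taken from the log (the livez endpoint and ss usage are standard, but nothing here asserts how minikube itself checks):

	# does anything listen on the apiserver port?
	sudo ss -ltnp | grep 8443 || echo "apiserver port 8443 not open"
	# probe the apiserver health endpoint directly; fails while the pod is down
	curl -ksS https://localhost:8443/livez || true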
	I1210 07:11:21.198374  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:21.208690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:21.208757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:21.235678  303437 cri.go:89] found id: ""
	I1210 07:11:21.235701  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.235710  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:21.235723  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:21.235788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:21.259648  303437 cri.go:89] found id: ""
	I1210 07:11:21.259671  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.259679  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:21.259685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:21.259742  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:21.284541  303437 cri.go:89] found id: ""
	I1210 07:11:21.284562  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.284571  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:21.284577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:21.284634  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:21.309347  303437 cri.go:89] found id: ""
	I1210 07:11:21.309371  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.309380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:21.309386  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:21.309449  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:21.337308  303437 cri.go:89] found id: ""
	I1210 07:11:21.337377  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.337397  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:21.337414  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:21.337498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:21.362600  303437 cri.go:89] found id: ""
	I1210 07:11:21.362622  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.362631  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:21.362637  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:21.362706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:21.386909  303437 cri.go:89] found id: ""
	I1210 07:11:21.386934  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.386951  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:21.386959  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:21.387045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:21.444294  303437 cri.go:89] found id: ""
	I1210 07:11:21.444331  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.444340  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:21.444350  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:21.444361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:21.537630  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:21.526461   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.527437   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.531792   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.532470   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.534191   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:21.526461   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.527437   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.531792   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.532470   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.534191   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:21.537650  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:21.537744  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:21.567303  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:21.567339  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:21.599305  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:21.599333  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:21.660956  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:21.660989  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:24.197663  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:24.209532  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:24.209604  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:24.235185  303437 cri.go:89] found id: ""
	I1210 07:11:24.235207  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.235215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:24.235222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:24.235291  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:24.269486  303437 cri.go:89] found id: ""
	I1210 07:11:24.269507  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.269515  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:24.269522  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:24.269580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:24.295987  303437 cri.go:89] found id: ""
	I1210 07:11:24.296010  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.296018  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:24.296024  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:24.296080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:24.321843  303437 cri.go:89] found id: ""
	I1210 07:11:24.321918  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.321932  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:24.321939  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:24.322070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:24.349226  303437 cri.go:89] found id: ""
	I1210 07:11:24.349296  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.349309  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:24.349316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:24.349439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:24.382513  303437 cri.go:89] found id: ""
	I1210 07:11:24.382595  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.382617  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:24.382636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:24.382759  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:24.423211  303437 cri.go:89] found id: ""
	I1210 07:11:24.423284  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.423306  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:24.423325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:24.423413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:24.483751  303437 cri.go:89] found id: ""
	I1210 07:11:24.483774  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.483783  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:24.483792  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:24.483831  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:24.554712  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:24.547245   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.548134   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.549755   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.550089   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.551593   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:24.547245   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.548134   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.549755   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.550089   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.551593   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:24.554746  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:24.554759  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:24.583135  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:24.583172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:24.621794  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:24.621824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:24.686891  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:24.686927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
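Each probe cycle above runs the identical check for every control-plane component: list all CRI containers whose name matches, then record the empty result. A standalone sketch of that loop, using the exact crictl invocation shown in the Run: lines (the component list is copied from the cycle above):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container matching $c"
	done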
	I1210 07:11:27.212817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:27.223470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:27.223540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:27.250394  303437 cri.go:89] found id: ""
	I1210 07:11:27.250421  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.250431  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:27.250437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:27.250497  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:27.275076  303437 cri.go:89] found id: ""
	I1210 07:11:27.275099  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.275108  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:27.275114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:27.275175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:27.300285  303437 cri.go:89] found id: ""
	I1210 07:11:27.300311  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.300321  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:27.300327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:27.300389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:27.324870  303437 cri.go:89] found id: ""
	I1210 07:11:27.324894  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.324904  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:27.324910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:27.324976  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:27.351041  303437 cri.go:89] found id: ""
	I1210 07:11:27.351063  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.351072  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:27.351079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:27.351145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:27.375920  303437 cri.go:89] found id: ""
	I1210 07:11:27.375942  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.375950  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:27.375957  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:27.376016  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:27.400149  303437 cri.go:89] found id: ""
	I1210 07:11:27.400174  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.400183  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:27.400190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:27.400248  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:27.436160  303437 cri.go:89] found id: ""
	I1210 07:11:27.436192  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.436201  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:27.436211  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:27.436222  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:27.498671  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:27.498704  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:27.512854  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:27.512880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:27.582038  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:27.573895   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.574782   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576306   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576889   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.578645   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:27.573895   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.574782   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576306   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576889   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.578645   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:27.582102  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:27.582129  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:27.610246  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:27.610287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:30.139493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:30.150290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:30.150358  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:30.176970  303437 cri.go:89] found id: ""
	I1210 07:11:30.177000  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.177008  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:30.177015  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:30.177079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:30.202200  303437 cri.go:89] found id: ""
	I1210 07:11:30.202226  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.202235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:30.202241  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:30.202300  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:30.226724  303437 cri.go:89] found id: ""
	I1210 07:11:30.226748  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.226757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:30.226763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:30.226825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:30.251813  303437 cri.go:89] found id: ""
	I1210 07:11:30.251835  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.251844  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:30.251850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:30.251912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:30.277078  303437 cri.go:89] found id: ""
	I1210 07:11:30.277099  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.277109  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:30.277115  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:30.277172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:30.305998  303437 cri.go:89] found id: ""
	I1210 07:11:30.306019  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.306027  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:30.306034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:30.306091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:30.334810  303437 cri.go:89] found id: ""
	I1210 07:11:30.334831  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.334839  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:30.334846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:30.334903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:30.359892  303437 cri.go:89] found id: ""
	I1210 07:11:30.359913  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.359921  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:30.359930  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:30.359940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:30.385054  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:30.385088  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:30.421360  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:30.421390  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:30.485019  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:30.485051  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:30.498844  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:30.498916  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:30.560538  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:30.552697   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.553538   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.554465   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.555942   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.556285   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:30.552697   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.553538   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.554465   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.555942   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.556285   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
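Between probe cycles the runner collects the same four log sources every time. For reference, these are the collection commands as they appear in the Run: lines above, restated as a standalone sketch (the $(...) form replaces the backtick substitution in the original; the 400-line depth is what this run used, not a claim about defaults):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a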
	I1210 07:11:33.062385  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:33.073083  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:33.073165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:33.097439  303437 cri.go:89] found id: ""
	I1210 07:11:33.097463  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.097471  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:33.097478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:33.097540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:33.124732  303437 cri.go:89] found id: ""
	I1210 07:11:33.124754  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.124763  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:33.124769  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:33.124829  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:33.153513  303437 cri.go:89] found id: ""
	I1210 07:11:33.153536  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.153545  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:33.153550  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:33.153610  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:33.179491  303437 cri.go:89] found id: ""
	I1210 07:11:33.179518  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.179526  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:33.179533  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:33.179593  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:33.205039  303437 cri.go:89] found id: ""
	I1210 07:11:33.205232  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.205248  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:33.205255  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:33.205332  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:33.231637  303437 cri.go:89] found id: ""
	I1210 07:11:33.231661  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.231670  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:33.231677  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:33.231740  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:33.257596  303437 cri.go:89] found id: ""
	I1210 07:11:33.257622  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.257630  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:33.257636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:33.257702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:33.283943  303437 cri.go:89] found id: ""
	I1210 07:11:33.283968  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.283978  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:33.283989  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:33.284003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:33.297130  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:33.297162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:33.358971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:33.351484   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.351972   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.353499   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.354087   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.355603   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:33.351484   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.351972   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.353499   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.354087   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.355603   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:33.359004  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:33.359053  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:33.383559  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:33.383593  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:33.411160  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:33.411184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:35.975172  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:35.985598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:35.985677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:36.012649  303437 cri.go:89] found id: ""
	I1210 07:11:36.012687  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.012698  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:36.012705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:36.012772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:36.039233  303437 cri.go:89] found id: ""
	I1210 07:11:36.039301  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.039325  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:36.039344  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:36.039440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:36.064743  303437 cri.go:89] found id: ""
	I1210 07:11:36.064766  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.064775  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:36.064781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:36.064839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:36.088939  303437 cri.go:89] found id: ""
	I1210 07:11:36.088961  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.088969  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:36.088975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:36.089037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:36.116797  303437 cri.go:89] found id: ""
	I1210 07:11:36.116821  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.116830  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:36.116836  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:36.116894  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:36.141419  303437 cri.go:89] found id: ""
	I1210 07:11:36.141447  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.141456  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:36.141463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:36.141525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:36.166138  303437 cri.go:89] found id: ""
	I1210 07:11:36.166165  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.166174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:36.166180  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:36.166242  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:36.193939  303437 cri.go:89] found id: ""
	I1210 07:11:36.194014  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.194036  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:36.194058  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:36.194096  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:36.250476  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:36.250507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:36.263989  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:36.264070  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:36.328452  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:36.320900   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.321474   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323175   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323732   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.325316   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:36.320900   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.321474   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323175   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323732   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.325316   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:36.328474  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:36.328487  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:36.353490  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:36.353523  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
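Each describe-nodes retry spawns a fresh kubectl process (PIDs 11292, 11406, 11512, 11627, 11746, 11873, 11961, 12077 above), roughly three seconds apart, and every attempt ends in the same refused connection. When scanning a section this long, the number of failed probe cycles can be counted from the repeated warning line; a small helper, assuming the report is saved locally as report.txt (a hypothetical filename, not part of this run):

	grep -c 'No container was found matching "kube-apiserver"' report.txt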
	I1210 07:11:38.890866  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:38.901365  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:38.901464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:38.932423  303437 cri.go:89] found id: ""
	I1210 07:11:38.932450  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.932458  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:38.932465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:38.932525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:38.959879  303437 cri.go:89] found id: ""
	I1210 07:11:38.959907  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.959915  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:38.959921  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:38.959978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:38.986312  303437 cri.go:89] found id: ""
	I1210 07:11:38.986338  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.986347  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:38.986353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:38.986410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:39.011808  303437 cri.go:89] found id: ""
	I1210 07:11:39.011830  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.011839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:39.011845  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:39.011908  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:39.037634  303437 cri.go:89] found id: ""
	I1210 07:11:39.037675  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.037685  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:39.037691  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:39.037763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:39.062989  303437 cri.go:89] found id: ""
	I1210 07:11:39.063073  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.063096  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:39.063114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:39.063200  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:39.092710  303437 cri.go:89] found id: ""
	I1210 07:11:39.092732  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.092740  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:39.092749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:39.092809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:39.116692  303437 cri.go:89] found id: ""
	I1210 07:11:39.116715  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.116724  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:39.116735  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:39.116745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:39.173134  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:39.173165  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:39.187543  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:39.187619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:39.248942  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:39.241567   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.242335   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.243953   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.244270   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.245769   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:39.241567   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.242335   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.243953   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.244270   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.245769   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:39.248964  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:39.248976  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:39.273536  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:39.273572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
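The cycle above is minikube's control-plane probe: it asks the CRI for each expected component by name and, finding none, falls through to log gathering. To reproduce the probe by hand inside the node (e.g. via `minikube ssh`), a minimal sketch using the exact crictl invocation from the log (the loop wrapper itself is an assumption, not minikube code):

    # Probe each control-plane component the way the log does; an empty
    # result from crictl is what produces the "No container was found" lines.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done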
	I1210 07:11:41.801091  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:41.812394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:41.812473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:41.838936  303437 cri.go:89] found id: ""
	I1210 07:11:41.839028  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.839042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:41.839050  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:41.839131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:41.864566  303437 cri.go:89] found id: ""
	I1210 07:11:41.864593  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.864603  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:41.864609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:41.864673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:41.889296  303437 cri.go:89] found id: ""
	I1210 07:11:41.889321  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.889330  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:41.889337  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:41.889396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:41.915562  303437 cri.go:89] found id: ""
	I1210 07:11:41.915589  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.915601  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:41.915608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:41.915670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:41.953369  303437 cri.go:89] found id: ""
	I1210 07:11:41.953395  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.953404  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:41.953410  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:41.953473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:41.985179  303437 cri.go:89] found id: ""
	I1210 07:11:41.985205  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.985216  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:41.985223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:41.985327  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:42.015327  303437 cri.go:89] found id: ""
	I1210 07:11:42.015400  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.015424  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:42.015443  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:42.015541  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:42.043382  303437 cri.go:89] found id: ""
	I1210 07:11:42.043407  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.043421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:42.043431  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:42.043443  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:42.080163  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:42.080196  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:42.139896  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:42.139935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:42.156701  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:42.156737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:42.234579  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:42.225807   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.226427   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228068   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228448   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.229958   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:42.225807   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.226427   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228068   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228448   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.229958   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:42.234662  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:42.234691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
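When the probe comes up empty, minikube collects the same four log sources each round. To pull three of them manually, the commands below are taken verbatim from the Run: lines above (only the ordering is mine):

    # Same unit and kernel logs minikube gathers each round (flags verbatim from the log).
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400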
	I1210 07:11:44.763362  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:44.773978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:44.774048  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:44.799637  303437 cri.go:89] found id: ""
	I1210 07:11:44.799665  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.799674  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:44.799680  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:44.799741  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:44.827772  303437 cri.go:89] found id: ""
	I1210 07:11:44.827797  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.827806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:44.827812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:44.827871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:44.851977  303437 cri.go:89] found id: ""
	I1210 07:11:44.852005  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.852014  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:44.852020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:44.852080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:44.876554  303437 cri.go:89] found id: ""
	I1210 07:11:44.876580  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.876590  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:44.876596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:44.876658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:44.903100  303437 cri.go:89] found id: ""
	I1210 07:11:44.903132  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.903141  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:44.903154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:44.903215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:44.933312  303437 cri.go:89] found id: ""
	I1210 07:11:44.933333  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.933342  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:44.933348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:44.933407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:44.969458  303437 cri.go:89] found id: ""
	I1210 07:11:44.969530  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.969552  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:44.969569  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:44.969666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:45.013288  303437 cri.go:89] found id: ""
	I1210 07:11:45.013381  303437 logs.go:282] 0 containers: []
	W1210 07:11:45.013403  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:45.013427  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:45.013468  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:45.111594  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:45.112597  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:45.131602  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:45.131636  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:45.220807  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:45.205512   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.206557   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.208854   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.209321   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.215241   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:45.205512   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.206557   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.208854   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.209321   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.215241   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:45.220830  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:45.220843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:45.257708  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:45.257752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
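The "container status" command relies on a backtick fallback: if `which crictl` finds nothing it substitutes the literal word crictl, and if that invocation then fails the trailing `|| sudo docker ps -a` covers Docker-runtime nodes. An equivalent expanded form (a sketch, not minikube code):

    # Equivalent expansion of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    crictl_bin=$(command -v crictl || echo crictl)
    sudo "$crictl_bin" ps -a || sudo docker ps -a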
	I1210 07:11:47.792395  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:47.802865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:47.802937  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:47.832152  303437 cri.go:89] found id: ""
	I1210 07:11:47.832175  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.832191  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:47.832198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:47.832262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:47.856843  303437 cri.go:89] found id: ""
	I1210 07:11:47.856868  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.856877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:47.856883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:47.856943  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:47.880564  303437 cri.go:89] found id: ""
	I1210 07:11:47.880586  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.880595  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:47.880601  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:47.880658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:47.908243  303437 cri.go:89] found id: ""
	I1210 07:11:47.908264  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.908273  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:47.908280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:47.908337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:47.951940  303437 cri.go:89] found id: ""
	I1210 07:11:47.951961  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.951969  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:47.951975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:47.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:47.986418  303437 cri.go:89] found id: ""
	I1210 07:11:47.986437  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.986446  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:47.986452  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:47.986511  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:48.018032  303437 cri.go:89] found id: ""
	I1210 07:11:48.018055  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.018064  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:48.018069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:48.018131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:48.045010  303437 cri.go:89] found id: ""
	I1210 07:11:48.045033  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.045043  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:48.045052  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:48.045063  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:48.070773  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:48.070806  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:48.100419  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:48.100451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:48.157253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:48.157287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:48.171891  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:48.171922  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:48.236843  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:48.228588   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.229356   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231115   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231743   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.233451   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:48.228588   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.229356   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231115   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231743   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.233451   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
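Every `describe nodes` attempt fails the same way: kubectl cannot reach the apiserver on localhost:8443 because, per the probe, no kube-apiserver container ever started, so nothing is listening on that port. Two quick manual checks (a sketch; the `ss` check is an assumption, while the kubectl binary and kubeconfig paths are taken from the log):

    # Nothing listening on 8443 is consistent with the connection-refused errors.
    sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443"
    # The same kubectl the log uses, against a lighter endpoint than describe.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz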
	I1210 07:11:50.738489  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:50.749165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:50.749232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:50.774993  303437 cri.go:89] found id: ""
	I1210 07:11:50.775032  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.775042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:50.775049  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:50.775108  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:50.800355  303437 cri.go:89] found id: ""
	I1210 07:11:50.800380  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.800389  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:50.800396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:50.800455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:50.825116  303437 cri.go:89] found id: ""
	I1210 07:11:50.825139  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.825148  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:50.825154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:50.825216  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:50.852419  303437 cri.go:89] found id: ""
	I1210 07:11:50.852441  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.852449  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:50.852455  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:50.852513  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:50.877502  303437 cri.go:89] found id: ""
	I1210 07:11:50.877522  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.877531  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:50.877537  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:50.877594  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:50.905139  303437 cri.go:89] found id: ""
	I1210 07:11:50.905161  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.905171  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:50.905177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:50.905237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:50.933267  303437 cri.go:89] found id: ""
	I1210 07:11:50.933291  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.933299  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:50.933305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:50.933364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:50.961246  303437 cri.go:89] found id: ""
	I1210 07:11:50.961267  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.961276  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:50.961285  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:50.961296  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:50.989123  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:50.989149  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:51.046128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:51.046168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:51.060977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:51.061014  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:51.126917  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:51.119326   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.119907   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.121515   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.122000   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.123466   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:51.119326   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.119907   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.121515   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.122000   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.123466   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:51.126938  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:51.126951  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:53.652260  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:53.662761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:53.662827  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:53.692655  303437 cri.go:89] found id: ""
	I1210 07:11:53.692728  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.692755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:53.692773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:53.692852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:53.726710  303437 cri.go:89] found id: ""
	I1210 07:11:53.726743  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.726752  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:53.726758  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:53.726816  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:53.751772  303437 cri.go:89] found id: ""
	I1210 07:11:53.751793  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.751802  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:53.751808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:53.751867  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:53.776281  303437 cri.go:89] found id: ""
	I1210 07:11:53.776347  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.776371  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:53.776391  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:53.776475  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:53.801234  303437 cri.go:89] found id: ""
	I1210 07:11:53.801259  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.801268  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:53.801275  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:53.801330  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:53.830240  303437 cri.go:89] found id: ""
	I1210 07:11:53.830265  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.830273  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:53.830280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:53.830341  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:53.855035  303437 cri.go:89] found id: ""
	I1210 07:11:53.855059  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.855069  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:53.855075  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:53.855140  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:53.883359  303437 cri.go:89] found id: ""
	I1210 07:11:53.883384  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.883401  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:53.883411  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:53.883423  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:53.923136  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:53.923215  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:53.985138  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:53.985172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:53.999740  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:53.999775  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:54.066156  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:54.058409   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.059038   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.060575   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.061111   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.062782   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:54.058409   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.059038   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.060575   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.061111   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.062782   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:54.066181  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:54.066194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:56.591475  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:56.601960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:56.602033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:56.626286  303437 cri.go:89] found id: ""
	I1210 07:11:56.626311  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.626320  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:56.626327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:56.626385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:56.650098  303437 cri.go:89] found id: ""
	I1210 07:11:56.650124  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.650133  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:56.650139  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:56.650201  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:56.677542  303437 cri.go:89] found id: ""
	I1210 07:11:56.677569  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.677578  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:56.677584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:56.677659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:56.709405  303437 cri.go:89] found id: ""
	I1210 07:11:56.709430  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.709439  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:56.709446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:56.709508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:56.739179  303437 cri.go:89] found id: ""
	I1210 07:11:56.739204  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.739212  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:56.739219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:56.739277  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:56.766584  303437 cri.go:89] found id: ""
	I1210 07:11:56.766609  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.766618  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:56.766624  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:56.766691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:56.791703  303437 cri.go:89] found id: ""
	I1210 07:11:56.791729  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.791739  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:56.791745  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:56.791809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:56.817298  303437 cri.go:89] found id: ""
	I1210 07:11:56.817325  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.817334  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:56.817344  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:56.817355  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:56.875173  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:56.875210  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:56.889120  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:56.889146  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:56.984238  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:56.984258  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:56.984270  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:57.011593  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:57.011627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
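Each round opens, after the roughly three-second retry gap visible in the timestamps, with a process-level check before the CRI probe. The pgrep line means: match against the full command line (-f), require a whole-line match (-x), and report only the newest match (-n); no output (exit status 1) is what sends minikube back into the container listing. A hedged one-liner to watch the same check by hand (the watch wrapper and quoting are my additions; the pattern is from the log):

    # Re-run minikube's apiserver liveness check every 3s (pattern from the log).
    watch -n 3 "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'apiserver process not found'"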
	I1210 07:11:59.548660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:59.559203  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:59.559272  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:59.584024  303437 cri.go:89] found id: ""
	I1210 07:11:59.584091  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.584113  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:59.584131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:59.584223  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:59.609283  303437 cri.go:89] found id: ""
	I1210 07:11:59.609307  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.609316  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:59.609325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:59.609385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:59.633912  303437 cri.go:89] found id: ""
	I1210 07:11:59.633935  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.633944  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:59.633951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:59.634012  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:59.660339  303437 cri.go:89] found id: ""
	I1210 07:11:59.660365  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.660373  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:59.660380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:59.660437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:59.697302  303437 cri.go:89] found id: ""
	I1210 07:11:59.697329  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.697342  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:59.697348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:59.697410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:59.733379  303437 cri.go:89] found id: ""
	I1210 07:11:59.733402  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.733411  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:59.733418  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:59.733488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:59.758324  303437 cri.go:89] found id: ""
	I1210 07:11:59.758350  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.758360  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:59.758366  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:59.758423  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:59.788265  303437 cri.go:89] found id: ""
	I1210 07:11:59.788304  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.788313  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:59.788323  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:59.788335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:59.816310  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:59.816335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:59.875191  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:59.875227  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:59.888706  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:59.888737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:59.964581  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:59.964604  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:59.964617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:02.490529  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:02.501579  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:02.501655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:02.530852  303437 cri.go:89] found id: ""
	I1210 07:12:02.530876  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.530885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:02.530894  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:02.530955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:02.561336  303437 cri.go:89] found id: ""
	I1210 07:12:02.561361  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.561370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:02.561377  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:02.561434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:02.585933  303437 cri.go:89] found id: ""
	I1210 07:12:02.585963  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.585972  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:02.585979  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:02.586040  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:02.611097  303437 cri.go:89] found id: ""
	I1210 07:12:02.611122  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.611131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:02.611137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:02.611199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:02.637900  303437 cri.go:89] found id: ""
	I1210 07:12:02.637925  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.637934  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:02.637941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:02.638002  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:02.669431  303437 cri.go:89] found id: ""
	I1210 07:12:02.669457  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.669467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:02.669474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:02.669536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:02.704940  303437 cri.go:89] found id: ""
	I1210 07:12:02.704967  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.704976  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:02.704983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:02.705044  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:02.733218  303437 cri.go:89] found id: ""
	I1210 07:12:02.733241  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.733251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:02.733260  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:02.733271  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:02.791544  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:02.791580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:02.805689  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:02.805716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:02.873516  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:02.873536  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:02.873548  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:02.898899  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:02.898932  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.445135  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:05.455827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:05.455898  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:05.481329  303437 cri.go:89] found id: ""
	I1210 07:12:05.481352  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.481363  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:05.481370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:05.481428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:05.507339  303437 cri.go:89] found id: ""
	I1210 07:12:05.507362  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.507371  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:05.507378  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:05.507444  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:05.531971  303437 cri.go:89] found id: ""
	I1210 07:12:05.531995  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.532004  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:05.532010  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:05.532074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:05.563046  303437 cri.go:89] found id: ""
	I1210 07:12:05.563069  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.563078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:05.563084  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:05.563147  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:05.587778  303437 cri.go:89] found id: ""
	I1210 07:12:05.587801  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.587810  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:05.587816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:05.587874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:05.611952  303437 cri.go:89] found id: ""
	I1210 07:12:05.611973  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.611982  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:05.611988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:05.612047  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:05.636683  303437 cri.go:89] found id: ""
	I1210 07:12:05.636705  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.636715  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:05.636721  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:05.636781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:05.674580  303437 cri.go:89] found id: ""
	I1210 07:12:05.674609  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.674619  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:05.674628  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:05.674640  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:05.690150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:05.690176  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:05.761058  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:05.761078  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:05.761090  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:05.786479  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:05.786515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.814400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:05.814426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.372748  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:08.382940  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:08.383032  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:08.406822  303437 cri.go:89] found id: ""
	I1210 07:12:08.406851  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.406860  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:08.406867  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:08.406931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:08.431746  303437 cri.go:89] found id: ""
	I1210 07:12:08.431775  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.431786  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:08.431795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:08.431857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:08.456129  303437 cri.go:89] found id: ""
	I1210 07:12:08.456152  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.456161  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:08.456167  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:08.456226  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:08.481945  303437 cri.go:89] found id: ""
	I1210 07:12:08.481981  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.481990  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:08.481997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:08.482070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:08.511057  303437 cri.go:89] found id: ""
	I1210 07:12:08.511080  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.511089  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:08.511095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:08.511165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:08.537072  303437 cri.go:89] found id: ""
	I1210 07:12:08.537094  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.537106  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:08.537113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:08.537188  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:08.562930  303437 cri.go:89] found id: ""
	I1210 07:12:08.562961  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.562970  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:08.562992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:08.563116  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:08.587421  303437 cri.go:89] found id: ""
	I1210 07:12:08.587446  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.587455  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:08.587464  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:08.587501  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.646970  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:08.647003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:08.661398  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:08.661426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:08.746222  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:08.746254  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:08.746267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:08.772476  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:08.772510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:11.303459  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:11.315726  303437 out.go:203] 
	W1210 07:12:11.316890  303437 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:12:11.316924  303437 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:12:11.316933  303437 out.go:285] * Related issues:
	* Related issues:
	W1210 07:12:11.316946  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:12:11.316957  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:12:11.318146  303437 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 105
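For context on the K8S_APISERVER_MISSING exit above: the start loop spent its 6m0s budget re-running a process probe inside the node, and the probe never matched. Both checks can be replayed by hand against the still-running container; the following is a convenience sketch using the profile name from the log, not part of the harness:

	# the process probe minikube retried over SSH (see the repeated pgrep lines above)
	out/minikube-linux-arm64 -p newest-cni-168808 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# the CRI listing used while gathering logs; an empty result matches the failure
	out/minikube-linux-arm64 -p newest-cni-168808 ssh -- sudo crictl ps -a --name=kube-apiserver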
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-168808
helpers_test.go:244: (dbg) docker inspect newest-cni-168808:

-- stdout --
	[
	    {
	        "Id": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	        "Created": "2025-12-10T06:55:56.205654512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:06:01.504514541Z",
	            "FinishedAt": "2025-12-10T07:05:59.862084086Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hosts",
	        "LogPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3-json.log",
	        "Name": "/newest-cni-168808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-168808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-168808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	                "LowerDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-168808",
	                "Source": "/var/lib/docker/volumes/newest-cni-168808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-168808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-168808",
	                "name.minikube.sigs.k8s.io": "newest-cni-168808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "515b233ea68ef1c9ed300584d10d72421aa77f4775a69279a293bdf725b2e113",
	            "SandboxKey": "/var/run/docker/netns/515b233ea68e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-168808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e3:f7:16:bb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fedd4ad26097ebf6757101ef8e22a141acd4ba740aa95d5f1eab7ffc232007f5",
	                    "EndpointID": "058f1c535f16248f59aad5f1fc5aceccd4ce55e84235161b803daa93fdc8a70f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-168808",
	                        "7d1db3aa80a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
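A reading aid for the inspect output above: the node publishes the apiserver port 8443/tcp on the host at 127.0.0.1:33106 (see NetworkSettings.Ports). A minimal host-side check of that mapping, assuming the container is still up, might look like:

	# confirm the published mapping reported by docker inspect
	docker port newest-cni-168808 8443/tcp
	# probe the apiserver through the mapping; with no apiserver process, connection refused is expected
	curl -sk --max-time 5 https://127.0.0.1:33106/version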
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (331.801191ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25: (1.535457138s)
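In the dump that follows, "==> Audit <==" lists recent minikube invocations recorded for this workspace and "==> Last Start <==" replays the log of the most recent start attempt; the -n 25 flag bounds how much of each source is included. To pull a single source out of a fuller dump, something like the following works (a convenience sketch, not part of the harness):

	out/minikube-linux-arm64 -p newest-cni-168808 logs | sed -n '/==> Last Start <==/,$p'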
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-451123 image list --format=json                                                                                                                                                                                                              │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ pause   │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ unpause │ -p embed-certs-451123 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │                     │
	│ stop    │ -p no-preload-320236 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ addons  │ enable dashboard -p no-preload-320236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │                     │
	│ stop    │ -p newest-cni-168808 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-168808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │ 10 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:06:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:06:00.999721  303437 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:06:00.999928  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:00.999941  303437 out.go:374] Setting ErrFile to fd 2...
	I1210 07:06:00.999948  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:01.000291  303437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:06:01.000840  303437 out.go:368] Setting JSON to false
	I1210 07:06:01.001958  303437 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6511,"bootTime":1765343850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:06:01.002049  303437 start.go:143] virtualization:  
	I1210 07:06:01.005229  303437 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:06:01.009127  303437 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:06:01.009191  303437 notify.go:221] Checking for updates...
	I1210 07:06:01.015115  303437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:06:01.018047  303437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:01.021396  303437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:06:01.024347  303437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:06:01.027298  303437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:06:01.030670  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:01.031359  303437 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:06:01.059280  303437 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:06:01.059409  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.117784  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.1083965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.117913  303437 docker.go:319] overlay module found
	I1210 07:06:01.121244  303437 out.go:179] * Using the docker driver based on existing profile
	I1210 07:06:01.124129  303437 start.go:309] selected driver: docker
	I1210 07:06:01.124152  303437 start.go:927] validating driver "docker" against &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.124257  303437 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:06:01.124971  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.177684  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.168448125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.178039  303437 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:06:01.178072  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:01.178124  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:01.178165  303437 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.183109  303437 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 07:06:01.185906  303437 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:06:01.188882  303437 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:06:01.191653  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:01.191725  303437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:06:01.211624  303437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:06:01.211647  303437 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:06:01.245655  303437 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 07:06:01.410333  303437 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
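
Note: the two 404s above are the expected preload-miss path for a release candidate: minikube probes the GCS bucket first and the GitHub release mirror second, and on a double miss falls back to its per-image cache (visible at 07:06:02 below). A minimal, self-contained Go sketch of that probe order, under the assumption that a plain HEAD request is enough (URLs taken from the log; the loop is ours, not minikube's preload.go):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Candidate preload locations, in the order the log shows them being tried.
        urls := []string{
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
            "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
        }
        for _, u := range urls {
            resp, err := http.Head(u)
            if err != nil {
                continue // network error: try the next mirror
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("preload available:", u)
                return
            }
            fmt.Println("status code:", resp.StatusCode, "for", u)
        }
        fmt.Println("no preload tarball; falling back to per-image cache")
    }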
	I1210 07:06:01.410482  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.410710  303437 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:06:01.410741  303437 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:01.410794  303437 start.go:364] duration metric: took 32.001µs to acquireMachinesLock for "newest-cni-168808"
	I1210 07:06:01.410811  303437 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:06:01.410817  303437 fix.go:54] fixHost starting: 
	I1210 07:06:01.411108  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.411381  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.445269  303437 fix.go:112] recreateIfNeeded on newest-cni-168808: state=Stopped err=<nil>
	W1210 07:06:01.445299  303437 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 07:05:57.413623  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:59.413665  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:01.448589  303437 out.go:252] * Restarting existing docker container for "newest-cni-168808" ...
	I1210 07:06:01.448678  303437 cli_runner.go:164] Run: docker start newest-cni-168808
	I1210 07:06:01.609744  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.770299  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.790186  303437 kic.go:430] container "newest-cni-168808" state is running.
	I1210 07:06:01.790574  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:01.816467  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.816783  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.816990  303437 machine.go:94] provisionDockerMachine start ...
	I1210 07:06:01.817053  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:01.864829  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:01.865171  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:01.865181  303437 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:06:01.865918  303437 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
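
Note: the single "ssh: handshake failed: EOF" above is benign; the container was started a fraction of a second earlier and sshd inside it is not yet accepting connections, so the provisioner keeps retrying until the hostname command succeeds at 07:06:05. A sketch of that wait-for-SSH pattern (a standalone illustration, not minikube's actual libmachine retry code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded SSH port until it accepts a TCP
    // connection or the deadline passes.
    func waitForSSH(addr string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // port is up; hand off to the real SSH client
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh endpoint %s not ready after %s", addr, deadline)
    }

    func main() {
        // 127.0.0.1:33103 is the host port Docker mapped to 22/tcp in this run.
        if err := waitForSSH("127.0.0.1:33103", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }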
	I1210 07:06:02.031349  303437 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031449  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:06:02.031458  303437 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.682µs
	I1210 07:06:02.031466  303437 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:06:02.031488  303437 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031520  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:06:02.031525  303437 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 49.765µs
	I1210 07:06:02.031536  303437 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031546  303437 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031572  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:06:02.031577  303437 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32µs
	I1210 07:06:02.031583  303437 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031592  303437 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031616  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:06:02.031621  303437 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 30.351µs
	I1210 07:06:02.031626  303437 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031635  303437 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031658  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:06:02.031663  303437 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 29.047µs
	I1210 07:06:02.031668  303437 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031676  303437 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031702  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:06:02.031711  303437 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.042µs
	I1210 07:06:02.031716  303437 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:06:02.031725  303437 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031752  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:06:02.031757  303437 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 32.509µs
	I1210 07:06:02.031762  303437 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:06:02.031770  303437 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031794  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:06:02.031799  303437 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.973µs
	I1210 07:06:02.031809  303437 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:06:02.031817  303437 cache.go:87] Successfully saved all images to host disk.
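
Note: the cache hits above follow a simple on-disk layout: each image reference maps to one file under cache/images/<arch>/, with the ':' before the tag replaced by '_'. A sketch of that mapping (the helper name is ours, not minikube's):

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // cachePath converts an image reference into the tar-file path used
    // by the per-image cache seen in the log.
    func cachePath(root, arch, image string) string {
        return filepath.Join(root, "cache", "images", arch,
            strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        fmt.Println(cachePath(
            "/home/jenkins/minikube-integration/22094-2307/.minikube",
            "arm64", "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"))
        // -> .../cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
    }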
	I1210 07:06:05.019038  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.019065  303437 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 07:06:05.019142  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.038167  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.038497  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.038514  303437 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 07:06:05.212495  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.212574  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.236676  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.236997  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.237020  303437 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:06:05.387591  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:06:05.387661  303437 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:06:05.387701  303437 ubuntu.go:190] setting up certificates
	I1210 07:06:05.387718  303437 provision.go:84] configureAuth start
	I1210 07:06:05.387781  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.406720  303437 provision.go:143] copyHostCerts
	I1210 07:06:05.406812  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:06:05.406827  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:06:05.406903  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:06:05.407068  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:06:05.407080  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:06:05.407115  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:06:05.409257  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:06:05.409288  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:06:05.409367  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:06:05.409470  303437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
	I1210 07:06:05.457283  303437 provision.go:177] copyRemoteCerts
	I1210 07:06:05.457369  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:06:05.457416  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.474754  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.578879  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:06:05.596686  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:06:05.614316  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:06:05.632529  303437 provision.go:87] duration metric: took 244.787433ms to configureAuth
	I1210 07:06:05.632557  303437 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:06:05.632770  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:05.632780  303437 machine.go:97] duration metric: took 3.815782677s to provisionDockerMachine
	I1210 07:06:05.632794  303437 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 07:06:05.632814  303437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:06:05.632866  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:06:05.632909  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.651511  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.755084  303437 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:06:05.758541  303437 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:06:05.758569  303437 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:06:05.758581  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:06:05.758636  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:06:05.758716  303437 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:06:05.758818  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:06:05.766638  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:05.784153  303437 start.go:296] duration metric: took 151.337167ms for postStartSetup
	I1210 07:06:05.784245  303437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:06:05.784296  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.801680  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.903956  303437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:06:05.910414  303437 fix.go:56] duration metric: took 4.499590898s for fixHost
	I1210 07:06:05.910487  303437 start.go:83] releasing machines lock for "newest-cni-168808", held for 4.499684126s
	I1210 07:06:05.910597  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.931294  303437 ssh_runner.go:195] Run: cat /version.json
	I1210 07:06:05.931352  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.933029  303437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:06:05.933104  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.966773  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.968660  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	W1210 07:06:01.914114  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:04.412714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:06.413234  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:06.164421  303437 ssh_runner.go:195] Run: systemctl --version
	I1210 07:06:06.170684  303437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:06:06.174920  303437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:06:06.174984  303437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:06:06.182557  303437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:06:06.182578  303437 start.go:496] detecting cgroup driver to use...
	I1210 07:06:06.182611  303437 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:06:06.182660  303437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:06:06.200334  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:06:06.213740  303437 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:06:06.213811  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:06:06.229308  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:06:06.242262  303437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:06:06.362603  303437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:06:06.483045  303437 docker.go:234] disabling docker service ...
	I1210 07:06:06.483112  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:06:06.498250  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:06:06.511747  303437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:06:06.628460  303437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:06:06.766872  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:06:06.779978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:06:06.794352  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:06.943808  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:06:06.954116  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:06:06.962677  303437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:06:06.962740  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:06:06.971255  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:06.980030  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:06:06.988476  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:07.007850  303437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:06:07.016475  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:06:07.025456  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:06:07.034855  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:06:07.044266  303437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:06:07.052503  303437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:06:07.060278  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:07.175410  303437 ssh_runner.go:195] Run: sudo systemctl restart containerd
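
Note: the sed pipeline above rewrites /etc/containerd/config.toml so containerd matches the host's "cgroupfs" driver: it pins sandbox_image to pause:3.10.1, forces SystemdCgroup = false, migrates the legacy runtime.v1/runc.v1 names to io.containerd.runc.v2, and re-enables unprivileged ports, then restarts containerd. The key substitution, translated from sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' into Go as an illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        // (?m) makes ^/$ match per line, mirroring sed's line-oriented edit;
        // the captured indentation is preserved via ${1}.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }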
	I1210 07:06:07.276715  303437 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:06:07.276786  303437 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:06:07.280624  303437 start.go:564] Will wait 60s for crictl version
	I1210 07:06:07.280687  303437 ssh_runner.go:195] Run: which crictl
	I1210 07:06:07.284270  303437 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:06:07.312279  303437 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:06:07.312345  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.332603  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.358017  303437 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:06:07.360940  303437 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:06:07.377362  303437 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:06:07.381128  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
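
Note: the bash one-liner above updates /etc/hosts by filtering out any stale host.minikube.internal entry, appending the current gateway IP, and staging the result in a temp file before copying it back into place. The same filter-and-append step as a Go sketch (function name is ours):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost drops any line ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" mapping, like the grep -v / echo pair in the log.
    func upsertHost(hosts, ip, name string) string {
        var out []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                out = append(out, line)
            }
        }
        out = append(out, ip+"\t"+name)
        return strings.Join(out, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n",
            "192.168.76.1", "host.minikube.internal"))
    }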
	I1210 07:06:07.393654  303437 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:06:07.396326  303437 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:06:07.396576  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.559787  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.709730  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
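
Note: each "Not caching binary" line above uses the go-getter-style checksum=file: syntax: the kubeadm binary is fetched straight from dl.k8s.io and verified against the published .sha256 file rather than being cached locally. A sketch of just the verification step (assuming the binary is already on disk; the expected digest would come from kubeadm.sha256):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "os"
        "strings"
    )

    // verify compares the SHA-256 of a downloaded file against the
    // hex digest published alongside it.
    func verify(path, wantHex string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        sum := sha256.Sum256(data)
        if got := hex.EncodeToString(sum[:]); got != strings.TrimSpace(wantHex) {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        // Placeholder digest: substitute the contents of kubeadm.sha256.
        if err := verify("kubeadm", "<digest from kubeadm.sha256>"); err != nil {
            fmt.Println(err)
        }
    }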
	I1210 07:06:07.859001  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:07.859128  303437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:06:07.883821  303437 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:06:07.883846  303437 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:06:07.883855  303437 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:06:07.883958  303437 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:06:07.884031  303437 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:06:07.913929  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:07.913952  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:07.913973  303437 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:06:07.913999  303437 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:06:07.914120  303437 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
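
Note: the dump above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---"; it is what gets written to /var/tmp/minikube/kubeadm.yaml.new (2233 bytes) a few lines below. A stdlib-only sketch that enumerates the documents by kind (the abbreviated config literal stands in for the full file):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\n" +
            "apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\n" +
            "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
            "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
        // Split on the YAML document separator and report each kind.
        for i, doc := range strings.Split(cfg, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }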
	
	I1210 07:06:07.914189  303437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:06:07.921856  303437 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:06:07.921924  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:06:07.929166  303437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:06:07.941324  303437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:06:07.954047  303437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1210 07:06:07.966208  303437 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:06:07.969747  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.979238  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.094271  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:08.111901  303437 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 07:06:08.111935  303437 certs.go:195] generating shared ca certs ...
	I1210 07:06:08.111952  303437 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.112156  303437 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:06:08.112239  303437 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:06:08.112261  303437 certs.go:257] generating profile certs ...
	I1210 07:06:08.112411  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 07:06:08.112508  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 07:06:08.112594  303437 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 07:06:08.112776  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:06:08.112825  303437 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:06:08.112863  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:06:08.112899  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:06:08.112950  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:06:08.112979  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:06:08.113053  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:08.113737  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:06:08.131868  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:06:08.149347  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:06:08.173211  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:06:08.201112  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:06:08.217931  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:06:08.234927  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:06:08.255525  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:06:08.274117  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:06:08.291924  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:06:08.309223  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:06:08.326082  303437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:06:08.338602  303437 ssh_runner.go:195] Run: openssl version
	I1210 07:06:08.345277  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.353152  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:06:08.360717  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364534  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364612  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.406623  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:06:08.414672  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.422361  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:06:08.430022  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433878  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433973  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.475572  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:06:08.483285  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.491000  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:06:08.498512  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502241  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502306  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.543558  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:06:08.551469  303437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:06:08.555461  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:06:08.597134  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:06:08.638002  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:06:08.678965  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:06:08.720427  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:06:08.763492  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
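
Note: each "openssl x509 -checkend 86400" run above exits non-zero only if the certificate expires within the next 24 hours, which is what would trigger regeneration; all six checks pass here. The equivalent check via crypto/x509 (the cert path is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }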
	I1210 07:06:08.809518  303437 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:08.809633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:06:08.809696  303437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:06:08.836487  303437 cri.go:89] found id: ""
	I1210 07:06:08.836609  303437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:06:08.844505  303437 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:06:08.844525  303437 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:06:08.844604  303437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:06:08.852026  303437 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:06:08.852667  303437 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.852944  303437 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-168808" cluster setting kubeconfig missing "newest-cni-168808" context setting]
	I1210 07:06:08.853395  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.854743  303437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:06:08.863687  303437 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:06:08.863719  303437 kubeadm.go:602] duration metric: took 19.187765ms to restartPrimaryControlPlane
	I1210 07:06:08.863729  303437 kubeadm.go:403] duration metric: took 54.219605ms to StartCluster
	I1210 07:06:08.863764  303437 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.863854  303437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.864943  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.865201  303437 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:06:08.865553  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:08.865626  303437 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:06:08.865710  303437 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-168808"
	I1210 07:06:08.865725  303437 addons.go:70] Setting dashboard=true in profile "newest-cni-168808"
	I1210 07:06:08.865738  303437 addons.go:70] Setting default-storageclass=true in profile "newest-cni-168808"
	I1210 07:06:08.865748  303437 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-168808"
	I1210 07:06:08.865755  303437 addons.go:239] Setting addon dashboard=true in "newest-cni-168808"
	W1210 07:06:08.865763  303437 addons.go:248] addon dashboard should already be in state true
	I1210 07:06:08.865787  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866234  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.865732  303437 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-168808"
	I1210 07:06:08.866264  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866892  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.866245  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.870618  303437 out.go:179] * Verifying Kubernetes components...
	I1210 07:06:08.877218  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.909365  303437 addons.go:239] Setting addon default-storageclass=true in "newest-cni-168808"
	I1210 07:06:08.909422  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.909955  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.935168  303437 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:06:08.938081  303437 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:06:08.938245  303437 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:06:08.941690  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:06:08.941720  303437 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:06:08.941756  303437 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:08.941772  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:06:08.941809  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.941835  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.974920  303437 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:08.974945  303437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:06:08.975007  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:09.018425  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.019111  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.028670  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.182128  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:09.189848  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:09.218621  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:06:09.218696  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:06:09.233237  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:09.248580  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:06:09.248655  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:06:09.280152  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:06:09.280225  303437 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:06:09.294171  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:06:09.294239  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:06:09.308986  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:06:09.309057  303437 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:06:09.323118  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:06:09.323195  303437 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:06:09.337212  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:06:09.337284  303437 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:06:09.351939  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:06:09.352006  303437 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:06:09.364684  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.364749  303437 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:06:09.377472  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.912036  303437 api_server.go:52] waiting for apiserver process to appear ...
	W1210 07:06:09.912102  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912165  303437 retry.go:31] will retry after 137.554553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:09.912180  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912239  303437 retry.go:31] will retry after 162.08127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
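Every failed apply above shares one root cause: kubectl validates manifests client-side against the server's published OpenAPI schema, and with kube-apiserver not yet listening on localhost:8443 that schema download fails with "connection refused", so apply exits 1 and prints the --validate=false hint. A minimal standalone probe, not minikube's code, that reproduces the underlying condition the stderr blocks report:

    // probe_apiserver.go: illustrative sketch showing the "connection refused"
    // state behind the validation failures above; a plain TCP dial is enough
    // to tell whether anything is listening on the apiserver port yet.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // e.g. "dial tcp [::1]:8443: connect: connection refused"
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }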
	I1210 07:06:09.912111  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:09.912371  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912391  303437 retry.go:31] will retry after 156.096194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
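The retry.go:31 lines reschedule each failed apply after a short jittered delay that grows across attempts (roughly 140-160ms at first, climbing to about 1.7s later in this log). From 07:06:10 onward the commands also add --force, which recreates objects that cannot be patched in place; it does not skip the schema validation that is failing here, so the retries continue until the apiserver answers. A minimal sketch of a jittered-backoff retry loop of that shape, assuming exponential growth; the exact minikube policy is not visible in the log and the retry helper is hypothetical:

    // retry_backoff.go: illustrative sketch of a jittered, growing retry
    // delay of the kind the retry.go:31 lines above suggest; not minikube's
    // actual implementation.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered delay between
    // failures (50%-150% of the current base, doubling each round), and
    // returns the last error if every attempt fails.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base/2 + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            base *= 2
        }
        return err
    }

    func main() {
        calls := 0
        _ = retry(5, 150*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return fmt.Errorf("connect: connection refused")
            }
            return nil
        })
    }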
	I1210 07:06:10.049986  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:10.068682  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:10.075250  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:10.139495  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.139526  303437 retry.go:31] will retry after 525.238587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196161  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196246  303437 retry.go:31] will retry after 422.355289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196206  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196316  303437 retry.go:31] will retry after 388.387448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.412254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:10.585608  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:10.619095  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:10.648889  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.648984  303437 retry.go:31] will retry after 452.281973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.665111  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:10.718838  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.718922  303437 retry.go:31] will retry after 323.626302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.751170  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.751201  303437 retry.go:31] will retry after 426.205037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.912296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:08.413486  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:10.912684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
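The two W-level lines from pid 296020 are interleaved from a second, concurrently running profile (no-preload-320236), which is polling its node's Ready condition over the API and hitting the same connection-refused state at 192.168.85.2:8443. A minimal client-go sketch of such a readiness check, not minikube's code; the kubeconfig path and the nodeReady helper are assumptions for illustration:

    // node_ready_check.go: illustrative client-go sketch of checking a node's
    // Ready condition, the operation the node_ready.go:55 warnings imply.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady returns whether the named node reports Ready=True; a dial
    // error here surfaces exactly like the warnings above.
    func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. dial tcp 192.168.85.2:8443: connect: connection refused
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Hypothetical kubeconfig path; minikube manages its own per profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(kubernetes.NewForConfigOrDie(cfg), "no-preload-320236")
        fmt.Println(ready, err)
    }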
	I1210 07:06:11.043189  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:11.101706  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.108011  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.108097  303437 retry.go:31] will retry after 465.500211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:11.171627  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.171733  303437 retry.go:31] will retry after 644.635053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.177835  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:11.248736  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.248773  303437 retry.go:31] will retry after 646.277835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.413044  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:11.574386  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:11.635719  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.635755  303437 retry.go:31] will retry after 992.827501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.816838  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.874310  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.874341  303437 retry.go:31] will retry after 847.092889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.895446  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:11.912890  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:11.979233  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.979274  303437 retry.go:31] will retry after 1.723803171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.412929  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:12.629708  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:12.711328  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.711402  303437 retry.go:31] will retry after 1.682909305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.721580  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:12.787715  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.787755  303437 retry.go:31] will retry after 1.523563907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.912980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.412270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.704137  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:13.769291  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.769319  303437 retry.go:31] will retry after 2.655752177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.912604  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:14.312036  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:14.379977  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.380010  303437 retry.go:31] will retry after 2.120509482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.395420  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:14.412979  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:14.494970  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.495005  303437 retry.go:31] will retry after 2.083776468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.913027  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.412429  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.912376  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:12.913304  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:15.412868  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
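Two test processes are interleaved in this stretch of the log: PID 303437 is retrying addon manifests for one cluster, while PID 296020 polls node no-preload-320236 for its Ready condition and hits the same refused connection, here on 192.168.85.2:8443. A client-go sketch of that readiness poll; the node name comes from the log, but the kubeconfig path and the loop shape are stand-ins, not minikube's node_ready.go:

// node_ready_sketch.go - fetch a node and check its Ready condition,
// retrying while the apiserver is unreachable.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; the real harness resolves it per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-320236", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is the "connection refused"
			// warning seen above; the caller just retries.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}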
	I1210 07:06:16.412255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:16.425325  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:16.500296  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.500325  303437 retry.go:31] will retry after 1.753545178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.501400  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:16.562473  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.562506  303437 retry.go:31] will retry after 5.63085781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.579894  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:16.640721  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.640756  303437 retry.go:31] will retry after 2.710169887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.912245  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.412350  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.913142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.254741  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:18.317147  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.317176  303437 retry.go:31] will retry after 6.057763532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.912752  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:19.352062  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:19.412870  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:19.413382  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.413410  303437 retry.go:31] will retry after 6.763226999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.913016  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.412997  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.913098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:17.413684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:19.913294  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:21.412278  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:21.913122  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.194391  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:22.251091  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.251123  303437 retry.go:31] will retry after 9.11395006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.412163  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.912351  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.412284  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.913156  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:24.375236  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:24.412827  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:24.440293  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.440322  303437 retry.go:31] will retry after 9.4401753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.912889  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.412233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.912307  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:21.913508  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:23.913605  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:26.413204  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:26.177306  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:26.250932  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.250965  303437 retry.go:31] will retry after 5.997165797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.412268  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:26.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.412900  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.912402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.412186  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.912521  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.412227  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.912255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.413237  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.912254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:28.413461  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:30.913644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:31.366162  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:31.412559  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:31.439835  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.439865  303437 retry.go:31] will retry after 9.181638872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
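The `retry.go:31` lines above show the pattern behind these failures: minikube's addon applier shells out to kubectl, and on a non-zero exit reschedules the apply with a growing, randomized delay (9.18s, 9.94s, 16.87s, ... in this run). A minimal Go sketch of that retry loop follows; the function names and backoff constants are illustrative, not minikube's actual retry.go.

// retry_sketch.go — a minimal sketch (not minikube's real retry.go) of the
// "apply failed, will retry after <delay>" behavior seen in this log.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` and, on failure,
// sleeps for a jittered, roughly doubling delay before trying again.
func applyWithRetry(kubectl, manifest string, attempts int) error {
	base := 5 * time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v\nstderr:\n%s", manifest, err, out)
		// Jitter so concurrent appliers (storageclass, dashboard, provisioner)
		// do not retry in lockstep against a down apiserver.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("apply failed, will retry after %v\n", delay)
		time.Sleep(delay)
		base *= 2
	}
	return lastErr
}

func main() {
	err := applyWithRetry("kubectl", "/etc/kubernetes/addons/storageclass.yaml", 4)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}

Note that the `--validate=false` hint in the stderr would not help here: validation fails only because the OpenAPI download needs the same apiserver that the apply itself needs, and that apiserver is refusing connections.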
	I1210 07:06:31.912411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.248486  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:32.313416  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.313450  303437 retry.go:31] will retry after 9.93876945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.412880  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.912746  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.412590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.880694  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:33.912312  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.964338  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:33.964372  303437 retry.go:31] will retry after 6.698338092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:34.413098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:34.912991  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.413188  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.912404  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.413489  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:35.913510  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:38.413592  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:40.413124  296020 node_ready.go:38] duration metric: took 6m0.00088218s for node "no-preload-320236" to be "Ready" ...
	I1210 07:06:40.416430  296020 out.go:203] 
	W1210 07:06:40.419386  296020 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:06:40.419405  296020 out.go:285] * 
	W1210 07:06:40.421537  296020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:06:40.424792  296020 out.go:203] 
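This is the point where the no-preload test (process 296020) gives up: node_ready.go polled the node's Ready condition every ~2.5s until a 6-minute deadline, then exited with GUEST_START. Below is a minimal sketch of that wait-with-deadline shape, assuming a plain HTTPS poll against the node URL from the log; a real client would authenticate and parse the Ready condition rather than check the status code.

// wait_node_sketch.go — a sketch of the WaitNodeCondition timeout pattern
// ("took 6m0.00088218s ... context deadline exceeded"). Illustrative only.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls url every 2.5s (the cadence in the log) until the
// context deadline fires.
func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		// The test apiserver uses a self-signed cert; a real client would
		// also present credentials and decode the node's Ready condition.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	tick := time.NewTicker(2500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err()) // "context deadline exceeded"
		case <-tick.C:
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("error getting node (will retry):", err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236"))
}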
	I1210 07:06:36.412320  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:36.912280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.412192  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.912490  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.412402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.912902  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.412781  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.912868  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.413057  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.621960  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:40.663144  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:40.779058  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.779095  303437 retry.go:31] will retry after 16.870406936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:40.830377  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.830410  303437 retry.go:31] will retry after 13.844749205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.912652  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.412296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.912802  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.252520  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:42.323589  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.323630  303437 retry.go:31] will retry after 27.422515535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.412805  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.912953  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.412903  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.912754  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.412272  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.912265  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.412790  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.912791  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.413202  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.912321  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.412292  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.912507  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.412885  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.912342  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.413070  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.912837  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.412236  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.912907  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.913181  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.412208  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.912275  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.412923  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:54.412280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:54.676234  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:54.749679  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:54.749717  303437 retry.go:31] will retry after 32.358913109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:54.913072  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.412886  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.913073  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.412961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.912198  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.412942  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.649751  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:57.723910  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.723937  303437 retry.go:31] will retry after 19.76255611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.912185  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.412253  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.912817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.412285  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.912592  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.412249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.912270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.412382  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.912282  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.412190  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.912865  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.412818  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.912286  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.412820  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.913148  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.412411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.912250  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.412297  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.913174  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.412239  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.912324  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:08.412210  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
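The half-second `pgrep -xnf kube-apiserver.*minikube.*` cadence above is a liveness probe: pgrep exits 0 as soon as a process matches the full-command-line pattern. A self-contained sketch of that poll loop follows; the SSH hop from ssh_runner.go is elided, so this version runs pgrep locally, and the time budget is illustrative.

// apiserver_poll_sketch.go — the 500ms pgrep poll seen in this log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
// -f matches against the full command line, -x requires an exact pattern
// match, -n keeps only the newest match; exit 0 means a process was found.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative budget
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond) // the cadence visible in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}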
	I1210 07:07:08.912197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:08.912278  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:08.940273  303437 cri.go:89] found id: ""
	I1210 07:07:08.940300  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.940309  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:08.940316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:08.940374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:08.976821  303437 cri.go:89] found id: ""
	I1210 07:07:08.976848  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.976857  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:08.976863  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:08.976928  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:09.004516  303437 cri.go:89] found id: ""
	I1210 07:07:09.004546  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.004555  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:09.004561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:09.004633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:09.029569  303437 cri.go:89] found id: ""
	I1210 07:07:09.029593  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.029602  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:09.029609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:09.029666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:09.055232  303437 cri.go:89] found id: ""
	I1210 07:07:09.055256  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.055265  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:09.055281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:09.055342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:09.080957  303437 cri.go:89] found id: ""
	I1210 07:07:09.080978  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.080986  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:09.080992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:09.081051  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:09.105491  303437 cri.go:89] found id: ""
	I1210 07:07:09.105561  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.105583  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:09.105603  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:09.105682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:09.129839  303437 cri.go:89] found id: ""
	I1210 07:07:09.129861  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.129870  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:09.129879  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:09.129890  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:09.157418  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:09.157444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:09.218619  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:09.218655  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:09.233569  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:09.233598  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:09.299933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:09.299954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:09.299968  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
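When the pgrep probe keeps failing, the cri.go/logs.go pass above falls back to asking the container runtime directly, listing containers per control-plane component with `crictl ps -a --quiet --name=<component>` and reporting each empty result. A sketch of that probe loop, shelling out to crictl with simplified error handling:

// cri_probe_sketch.go — the per-component crictl probe from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers is the Go side of `sudo crictl ps -a --quiet --name=<name>`:
// --quiet prints only container IDs, one per line; no match means no output.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listContainers(name)
		switch {
		case err != nil:
			fmt.Printf("listing %q failed: %v\n", name, err)
		case len(ids) == 0:
			fmt.Printf("No container was found matching %q\n", name)
		default:
			fmt.Printf("%s: found %v\n", name, ids)
		}
	}
}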
	I1210 07:07:09.746365  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:09.810849  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:09.810882  303437 retry.go:31] will retry after 38.106772232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:11.825038  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:11.835407  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:11.835491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:11.859384  303437 cri.go:89] found id: ""
	I1210 07:07:11.859407  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.859416  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:11.859422  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:11.859482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:11.883645  303437 cri.go:89] found id: ""
	I1210 07:07:11.883667  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.883677  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:11.883683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:11.883746  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:11.912907  303437 cri.go:89] found id: ""
	I1210 07:07:11.912987  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.913010  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:11.913029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:11.913135  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:11.954332  303437 cri.go:89] found id: ""
	I1210 07:07:11.954354  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.954363  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:11.954369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:11.954447  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:11.987932  303437 cri.go:89] found id: ""
	I1210 07:07:11.988008  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.988024  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:11.988048  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:11.988134  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:12.016019  303437 cri.go:89] found id: ""
	I1210 07:07:12.016043  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.016052  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:12.016059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:12.016161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:12.041574  303437 cri.go:89] found id: ""
	I1210 07:07:12.041616  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.041625  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:12.041633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:12.041702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:12.067242  303437 cri.go:89] found id: ""
	I1210 07:07:12.067309  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.067335  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:12.067351  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:12.067368  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:12.080423  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:12.080492  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:12.142902  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:12.135099    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.135777    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.137430    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.138066    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.139703    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:12.135099    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.135777    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.137430    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.138066    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.139703    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
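
The describe-nodes step fails the same way on every pass: kubectl cannot reach the apiserver on localhost:8443. A quick way to reproduce the same condition outside kubectl is a plain TCP dial; this probe is illustrative only and not part of the test:

	// Dial the apiserver port directly; "connect: connection refused"
	// means nothing is listening there, matching the kubectl errors above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8443 is open")
	}
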
	I1210 07:07:12.142926  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:12.142940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:12.170013  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:12.170095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:12.205843  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:12.205871  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
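
The container-status command in this gathering pass uses a shell fallback: prefer crictl if installed, otherwise run docker ps -a. A hedged Go rendering of that idea (not minikube's implementation; note the shell form also falls back when crictl exists but exits non-zero, while this sketch only checks for presence):

	// Same fallback idea in Go: use crictl when it is on PATH, otherwise docker.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerStatus() ([]byte, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		}
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
			return
		}
		fmt.Print(string(out))
	}
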
	I1210 07:07:14.769151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:14.779543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:14.779628  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:14.804854  303437 cri.go:89] found id: ""
	I1210 07:07:14.804877  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.804885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:14.804892  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:14.804951  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:14.829499  303437 cri.go:89] found id: ""
	I1210 07:07:14.829521  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.829529  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:14.829535  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:14.829592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:14.857960  303437 cri.go:89] found id: ""
	I1210 07:07:14.857984  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.857993  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:14.858000  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:14.858058  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:14.882942  303437 cri.go:89] found id: ""
	I1210 07:07:14.882964  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.882972  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:14.882978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:14.883074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:14.906556  303437 cri.go:89] found id: ""
	I1210 07:07:14.906582  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.906591  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:14.906598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:14.906653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:14.944744  303437 cri.go:89] found id: ""
	I1210 07:07:14.944771  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.944780  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:14.944796  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:14.944859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:14.974225  303437 cri.go:89] found id: ""
	I1210 07:07:14.974248  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.974256  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:14.974263  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:14.974323  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:15.005431  303437 cri.go:89] found id: ""
	I1210 07:07:15.005515  303437 logs.go:282] 0 containers: []
	W1210 07:07:15.005539  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:15.005564  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:15.005607  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:15.075329  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:15.067284    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.067872    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069470    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069869    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.071556    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:15.067284    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.067872    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069470    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069869    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.071556    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:15.075363  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:15.075376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:15.100635  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:15.100670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:15.129987  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:15.130013  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:15.198219  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:15.198300  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:17.487235  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:17.543553  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:17.543587  303437 retry.go:31] will retry after 31.69876155s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
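
The storageclass apply cannot download the OpenAPI schema for validation while the apiserver is down, so addons.go schedules a retry; the odd 31.69876155s delay suggests a jittered backoff. A sketch of such a loop, with the jitter policy stated as an assumption rather than retry.go's actual code:

	// Hedged sketch of a retry loop like the one behind the retry.go:31
	// lines above; the jitter policy here is assumed, not minikube's exact one.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryAfterJitter(base time.Duration, attempts int, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			// Randomized wait, which is why the log shows uneven values
			// like 31.69876155s rather than a fixed interval.
			wait := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		err := retryAfterJitter(2*time.Second, 3, func() error {
			return errors.New("connection refused") // stands in for the failing kubectl apply
		})
		fmt.Println("gave up:", err)
	}
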
	I1210 07:07:17.712834  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:17.723193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:17.723262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:17.747430  303437 cri.go:89] found id: ""
	I1210 07:07:17.747453  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.747462  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:17.747468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:17.747525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:17.771960  303437 cri.go:89] found id: ""
	I1210 07:07:17.771982  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.771990  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:17.771996  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:17.772060  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:17.796155  303437 cri.go:89] found id: ""
	I1210 07:07:17.796176  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.796184  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:17.796190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:17.796251  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:17.825359  303437 cri.go:89] found id: ""
	I1210 07:07:17.825385  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.825394  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:17.825401  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:17.825462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:17.853147  303437 cri.go:89] found id: ""
	I1210 07:07:17.853170  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.853178  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:17.853184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:17.853243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:17.878806  303437 cri.go:89] found id: ""
	I1210 07:07:17.878830  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.878839  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:17.878846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:17.878905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:17.902975  303437 cri.go:89] found id: ""
	I1210 07:07:17.902999  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.903007  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:17.903037  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:17.903112  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:17.934568  303437 cri.go:89] found id: ""
	I1210 07:07:17.934592  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.934600  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:17.934610  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:17.934621  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:17.999695  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:17.999740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:18.029219  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:18.029256  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:18.094199  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:18.085818    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.086373    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.088284    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.089006    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.090767    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:18.085818    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.086373    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.088284    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.089006    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.090767    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:18.094223  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:18.094238  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:18.120245  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:18.120283  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.649514  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:20.661165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:20.661236  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:20.686549  303437 cri.go:89] found id: ""
	I1210 07:07:20.686572  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.686581  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:20.686587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:20.686654  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:20.711873  303437 cri.go:89] found id: ""
	I1210 07:07:20.711895  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.711903  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:20.711910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:20.711968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:20.736261  303437 cri.go:89] found id: ""
	I1210 07:07:20.736283  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.736292  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:20.736298  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:20.736360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:20.765759  303437 cri.go:89] found id: ""
	I1210 07:07:20.765781  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.765797  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:20.765804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:20.765862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:20.793639  303437 cri.go:89] found id: ""
	I1210 07:07:20.793661  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.793669  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:20.793675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:20.793751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:20.818318  303437 cri.go:89] found id: ""
	I1210 07:07:20.818339  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.818347  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:20.818354  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:20.818417  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:20.843499  303437 cri.go:89] found id: ""
	I1210 07:07:20.843523  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.843533  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:20.843539  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:20.843598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:20.868745  303437 cri.go:89] found id: ""
	I1210 07:07:20.868768  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.868776  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:20.868785  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:20.868796  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.897905  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:20.897981  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:20.962576  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:20.962654  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:20.977746  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:20.977835  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:21.045052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:21.037239    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.037840    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.039622    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.040051    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.041574    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:21.037239    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.037840    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.039622    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.040051    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.041574    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:21.045073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:21.045085  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:23.570777  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:23.580946  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:23.581021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:23.605355  303437 cri.go:89] found id: ""
	I1210 07:07:23.605379  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.605388  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:23.605394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:23.605451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:23.632675  303437 cri.go:89] found id: ""
	I1210 07:07:23.632697  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.632706  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:23.632713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:23.632783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:23.656579  303437 cri.go:89] found id: ""
	I1210 07:07:23.656602  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.656610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:23.656617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:23.656675  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:23.684796  303437 cri.go:89] found id: ""
	I1210 07:07:23.684816  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.684825  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:23.684832  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:23.684893  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:23.709043  303437 cri.go:89] found id: ""
	I1210 07:07:23.709064  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.709073  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:23.709079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:23.709149  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:23.733315  303437 cri.go:89] found id: ""
	I1210 07:07:23.733340  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.733348  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:23.733355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:23.733413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:23.761492  303437 cri.go:89] found id: ""
	I1210 07:07:23.761514  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.761524  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:23.761530  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:23.761586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:23.786489  303437 cri.go:89] found id: ""
	I1210 07:07:23.786511  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.786520  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:23.786530  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:23.786540  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:23.812193  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:23.812231  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:23.842956  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:23.842990  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:23.898018  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:23.898052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:23.912477  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:23.912507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:23.996757  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:23.988502    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.989251    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.990734    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.991360    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.993170    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:23.988502    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.989251    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.990734    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.991360    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.993170    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:26.497835  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:26.508472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:26.508547  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:26.533241  303437 cri.go:89] found id: ""
	I1210 07:07:26.533264  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.533272  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:26.533279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:26.533337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:26.558844  303437 cri.go:89] found id: ""
	I1210 07:07:26.558868  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.558877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:26.558883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:26.558941  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:26.584008  303437 cri.go:89] found id: ""
	I1210 07:07:26.584042  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.584051  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:26.584058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:26.584176  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:26.609123  303437 cri.go:89] found id: ""
	I1210 07:07:26.609145  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.609153  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:26.609160  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:26.609220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:26.633105  303437 cri.go:89] found id: ""
	I1210 07:07:26.633127  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.633136  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:26.633142  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:26.633220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:26.662834  303437 cri.go:89] found id: ""
	I1210 07:07:26.662858  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.662875  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:26.662897  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:26.662989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:26.688296  303437 cri.go:89] found id: ""
	I1210 07:07:26.688318  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.688326  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:26.688332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:26.688401  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:26.714475  303437 cri.go:89] found id: ""
	I1210 07:07:26.714545  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.714564  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:26.714595  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:26.714609  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:26.769794  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:26.769827  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:26.782871  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:26.782909  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:26.843846  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:26.836227    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.836952    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838581    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838893    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.840461    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:26.836227    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.836952    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838581    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838893    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.840461    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:26.843867  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:26.843881  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:26.869319  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:26.869353  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:27.109532  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:27.174544  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:27.174590  303437 retry.go:31] will retry after 31.997742819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
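
The storage-provisioner apply fails identically about ten seconds after the storageclass one. The stderr suggests --validate=false as an escape hatch, but that would not help here: with nothing listening on 8443 the request itself would still be refused, which is presumably why the code retries instead. For reference, the invocation the log shows, wrapped in os/exec; this wrapper is illustrative only, with the paths taken verbatim from the log:

	// Run the same apply command the ssh_runner.go line records, with
	// KUBECONFIG passed through sudo as an environment assignment.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			// With the apiserver down, validation cannot fetch the OpenAPI
			// schema, so kubectl exits 1 before anything reaches the cluster.
			fmt.Println("apply failed:", err)
		}
	}
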
	I1210 07:07:29.396194  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:29.406428  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:29.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:29.433424  303437 cri.go:89] found id: ""
	I1210 07:07:29.433455  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.433465  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:29.433471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:29.433536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:29.463589  303437 cri.go:89] found id: ""
	I1210 07:07:29.463615  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.463624  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:29.463630  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:29.463686  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:29.492343  303437 cri.go:89] found id: ""
	I1210 07:07:29.492365  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.492374  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:29.492380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:29.492437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:29.516069  303437 cri.go:89] found id: ""
	I1210 07:07:29.516097  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.516106  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:29.516113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:29.516171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:29.539661  303437 cri.go:89] found id: ""
	I1210 07:07:29.539693  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.539703  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:29.539712  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:29.539781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:29.563791  303437 cri.go:89] found id: ""
	I1210 07:07:29.563814  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.563823  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:29.563829  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:29.563887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:29.589136  303437 cri.go:89] found id: ""
	I1210 07:07:29.589160  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.589168  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:29.589175  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:29.589233  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:29.614701  303437 cri.go:89] found id: ""
	I1210 07:07:29.614724  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.614734  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:29.614743  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:29.614756  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:29.670207  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:29.670240  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:29.683977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:29.684005  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:29.748039  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:29.739856    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.740576    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742311    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742892    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.744661    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:29.739856    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.740576    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742311    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742892    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.744661    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:29.748061  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:29.748077  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:29.772992  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:29.773024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:32.300508  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:32.310795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:32.310865  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:32.334361  303437 cri.go:89] found id: ""
	I1210 07:07:32.334387  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.334396  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:32.334403  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:32.334478  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:32.361534  303437 cri.go:89] found id: ""
	I1210 07:07:32.361627  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.361651  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:32.361681  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:32.361764  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:32.386488  303437 cri.go:89] found id: ""
	I1210 07:07:32.386513  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.386521  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:32.386528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:32.386588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:32.415239  303437 cri.go:89] found id: ""
	I1210 07:07:32.415265  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.415274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:32.415280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:32.415340  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:32.443074  303437 cri.go:89] found id: ""
	I1210 07:07:32.443097  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.443105  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:32.443111  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:32.443170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:32.477593  303437 cri.go:89] found id: ""
	I1210 07:07:32.477620  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.477629  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:32.477636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:32.477693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:32.502550  303437 cri.go:89] found id: ""
	I1210 07:07:32.502575  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.502584  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:32.502590  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:32.502666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:32.527562  303437 cri.go:89] found id: ""
	I1210 07:07:32.527585  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.527606  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:32.527616  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:32.527632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:32.588732  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:32.581238    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.581723    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583416    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583864    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.585321    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:32.581238    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.581723    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583416    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583864    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.585321    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:32.588755  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:32.588767  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:32.614322  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:32.614354  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:32.642747  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:32.642777  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:32.697541  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:32.697576  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:35.211281  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:35.221258  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:35.221336  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:35.253168  303437 cri.go:89] found id: ""
	I1210 07:07:35.253193  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.253203  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:35.253210  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:35.253268  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:35.281234  303437 cri.go:89] found id: ""
	I1210 07:07:35.281257  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.281267  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:35.281273  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:35.281333  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:35.310530  303437 cri.go:89] found id: ""
	I1210 07:07:35.310554  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.310563  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:35.310570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:35.310627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:35.334764  303437 cri.go:89] found id: ""
	I1210 07:07:35.334792  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.334801  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:35.334813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:35.334870  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:35.361502  303437 cri.go:89] found id: ""
	I1210 07:07:35.361525  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.361534  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:35.361540  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:35.361607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:35.389058  303437 cri.go:89] found id: ""
	I1210 07:07:35.389080  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.389089  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:35.389095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:35.389154  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:35.425176  303437 cri.go:89] found id: ""
	I1210 07:07:35.425215  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.425226  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:35.425232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:35.425299  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:35.453052  303437 cri.go:89] found id: ""
	I1210 07:07:35.453079  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.453088  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
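	The block above is one pass of minikube's container poll: for each control-plane component it lists matching CRI containers and finds none, which is why the subsequent log gathering has nothing but kubelet, dmesg, and containerd to report. A minimal sketch of the same check, run by hand (an assumed manual session, not taken from this report):

	# For each expected component, ask the CRI runtime for any container
	# (running or exited) whose name matches; an empty result reproduces
	# the 'No container was found matching ...' warnings above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container found matching \"$c\""
	done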
	I1210 07:07:35.453097  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:35.453108  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:35.522148  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:35.513319    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.513889    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.516065    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517374    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517853    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:35.522174  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:35.522186  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:35.547665  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:35.547698  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:35.575564  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:35.575596  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:35.634362  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:35.634400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
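	Each retry cycle ends with the same four log sources: kubelet and containerd via journalctl, the kernel ring buffer via dmesg, and container state via crictl (falling back to docker). Collected as one script, the sequence looks roughly like this (a sketch of the commands the report already shows, assuming a shell on the node):

	# Replay minikube's log-gathering pass by hand.
	sudo journalctl -u kubelet -n 400          # kubelet service log
	sudo journalctl -u containerd -n 400       # container runtime log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a           # container status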
	I1210 07:07:38.149569  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:38.160486  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:38.160568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:38.201222  303437 cri.go:89] found id: ""
	I1210 07:07:38.201245  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.201253  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:38.201260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:38.201317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:38.237151  303437 cri.go:89] found id: ""
	I1210 07:07:38.237174  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.237183  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:38.237189  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:38.237259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:38.262732  303437 cri.go:89] found id: ""
	I1210 07:07:38.262760  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.262770  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:38.262777  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:38.262835  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:38.293247  303437 cri.go:89] found id: ""
	I1210 07:07:38.293273  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.293283  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:38.293290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:38.293351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:38.317818  303437 cri.go:89] found id: ""
	I1210 07:07:38.317840  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.317849  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:38.317855  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:38.317911  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:38.342419  303437 cri.go:89] found id: ""
	I1210 07:07:38.342447  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.342465  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:38.342473  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:38.342545  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:38.367206  303437 cri.go:89] found id: ""
	I1210 07:07:38.367271  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.367295  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:38.367316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:38.367408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:38.395595  303437 cri.go:89] found id: ""
	I1210 07:07:38.395617  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.395626  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:38.395635  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:38.395646  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:38.455465  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:38.455496  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:38.469974  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:38.470052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:38.534901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:38.526759    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.527529    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529111    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529677    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.531428    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:38.534975  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:38.535033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:38.560101  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:38.560133  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:41.091155  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:41.101359  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:41.101439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:41.124928  303437 cri.go:89] found id: ""
	I1210 07:07:41.124950  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.124958  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:41.124964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:41.125021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:41.150502  303437 cri.go:89] found id: ""
	I1210 07:07:41.150525  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.150534  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:41.150541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:41.150597  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:41.175254  303437 cri.go:89] found id: ""
	I1210 07:07:41.175280  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.175289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:41.175295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:41.175355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:41.213279  303437 cri.go:89] found id: ""
	I1210 07:07:41.213302  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.213311  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:41.213317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:41.213376  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:41.241895  303437 cri.go:89] found id: ""
	I1210 07:07:41.241922  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.241931  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:41.241938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:41.241997  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:41.266233  303437 cri.go:89] found id: ""
	I1210 07:07:41.266259  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.266274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:41.266280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:41.266375  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:41.295481  303437 cri.go:89] found id: ""
	I1210 07:07:41.295503  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.295512  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:41.295519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:41.295586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:41.325350  303437 cri.go:89] found id: ""
	I1210 07:07:41.325372  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.325381  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:41.325390  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:41.325402  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:41.381086  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:41.381121  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:41.394364  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:41.394411  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:41.475813  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:41.467819    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.468574    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.470350    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.471004    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.472517    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:41.475836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:41.475849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:41.500717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:41.500751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:44.031462  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:44.042099  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:44.042173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:44.066643  303437 cri.go:89] found id: ""
	I1210 07:07:44.066674  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.066683  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:44.066689  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:44.066752  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:44.091511  303437 cri.go:89] found id: ""
	I1210 07:07:44.091533  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.091542  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:44.091548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:44.091627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:44.116433  303437 cri.go:89] found id: ""
	I1210 07:07:44.116455  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.116464  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:44.116470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:44.116527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:44.141546  303437 cri.go:89] found id: ""
	I1210 07:07:44.141568  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.141576  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:44.141583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:44.141659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:44.183580  303437 cri.go:89] found id: ""
	I1210 07:07:44.183602  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.183610  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:44.183616  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:44.183673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:44.214628  303437 cri.go:89] found id: ""
	I1210 07:07:44.214651  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.214659  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:44.214666  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:44.214738  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:44.241699  303437 cri.go:89] found id: ""
	I1210 07:07:44.241721  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.241729  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:44.241736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:44.241805  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:44.266706  303437 cri.go:89] found id: ""
	I1210 07:07:44.266729  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.266737  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:44.266746  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:44.266758  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:44.321835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:44.321867  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:44.335089  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:44.335120  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:44.395294  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:44.387779    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.388344    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389371    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389875    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.391491    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:44.395360  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:44.395388  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:44.425916  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:44.425956  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:46.965660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:46.976149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:46.976221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:47.003597  303437 cri.go:89] found id: ""
	I1210 07:07:47.003620  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.003629  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:47.003636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:47.003709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:47.028196  303437 cri.go:89] found id: ""
	I1210 07:07:47.028218  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.028226  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:47.028232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:47.028290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:47.056800  303437 cri.go:89] found id: ""
	I1210 07:07:47.056824  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.056833  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:47.056840  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:47.056916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:47.081593  303437 cri.go:89] found id: ""
	I1210 07:07:47.081656  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.081678  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:47.081697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:47.081767  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:47.110385  303437 cri.go:89] found id: ""
	I1210 07:07:47.110451  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.110474  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:47.110492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:47.110563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:47.136398  303437 cri.go:89] found id: ""
	I1210 07:07:47.136465  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.136490  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:47.136503  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:47.136576  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:47.162521  303437 cri.go:89] found id: ""
	I1210 07:07:47.162545  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.162554  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:47.162560  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:47.162617  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:47.200031  303437 cri.go:89] found id: ""
	I1210 07:07:47.200052  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.200060  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:47.200069  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:47.200080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:47.240172  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:47.240197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:47.295589  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:47.295625  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:47.308817  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:47.308843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:47.373455  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:47.365473    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.366342    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.367939    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.368531    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.370138    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:47.373479  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:47.373504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:47.918542  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:48.000256  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:48.000468  303437 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:07:49.243254  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:49.300794  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:49.300885  303437 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
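	Both addon applies fail for the same root cause: kubectl cannot download the OpenAPI schema for client-side validation because the apiserver is down. The error's own suggestion, --validate=false, would only skip that schema fetch; the apply would still fail at submission. Shown here purely to illustrate what the flag changes (a hedged sketch, not a fix attempted by the test):

	# Skipping client-side validation removes the 'failed to download openapi'
	# error, but the request itself still needs a reachable apiserver.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storageclass.yaml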
	I1210 07:07:49.898427  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:49.908683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:49.908754  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:49.934109  303437 cri.go:89] found id: ""
	I1210 07:07:49.934136  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.934145  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:49.934152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:49.934214  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:49.959202  303437 cri.go:89] found id: ""
	I1210 07:07:49.959226  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.959235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:49.959252  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:49.959329  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:49.983331  303437 cri.go:89] found id: ""
	I1210 07:07:49.983356  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.983364  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:49.983371  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:49.983427  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:50.012230  303437 cri.go:89] found id: ""
	I1210 07:07:50.012265  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.012274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:50.012281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:50.012350  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:50.039851  303437 cri.go:89] found id: ""
	I1210 07:07:50.039880  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.039889  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:50.039895  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:50.039962  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:50.071162  303437 cri.go:89] found id: ""
	I1210 07:07:50.071186  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.071195  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:50.071201  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:50.071265  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:50.097095  303437 cri.go:89] found id: ""
	I1210 07:07:50.097118  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.097127  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:50.097134  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:50.097198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:50.121941  303437 cri.go:89] found id: ""
	I1210 07:07:50.121966  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.121976  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:50.121985  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:50.121998  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:50.178251  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:50.178286  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:50.195455  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:50.195491  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:50.283052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:50.274829    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.275404    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277130    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277736    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.279468    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:50.283077  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:50.283098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:50.309433  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:50.309472  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
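	Each cycle opens with a process-level probe before any CRI queries: pgrep is asked for the newest process (-n) whose full command line (-f) exactly matches (-x) the kube-apiserver pattern. A one-line sketch of that gate (illustrative, not from the report):

	# The exit status tells minikube whether an apiserver process exists at all;
	# here it keeps failing, so each cycle falls through to log gathering.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"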
	I1210 07:07:52.837493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:52.848301  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:52.848370  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:52.872661  303437 cri.go:89] found id: ""
	I1210 07:07:52.872682  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.872690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:52.872696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:52.872755  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:52.895064  303437 cri.go:89] found id: ""
	I1210 07:07:52.895090  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.895100  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:52.895112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:52.895170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:52.918926  303437 cri.go:89] found id: ""
	I1210 07:07:52.918950  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.918958  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:52.918964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:52.919038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:52.942801  303437 cri.go:89] found id: ""
	I1210 07:07:52.942823  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.942831  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:52.942838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:52.942895  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:52.968885  303437 cri.go:89] found id: ""
	I1210 07:07:52.968910  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.968919  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:52.968925  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:52.968984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:52.992050  303437 cri.go:89] found id: ""
	I1210 07:07:52.992072  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.992080  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:52.992087  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:52.992145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:53.020481  303437 cri.go:89] found id: ""
	I1210 07:07:53.020507  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.020516  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:53.020523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:53.020586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:53.045391  303437 cri.go:89] found id: ""
	I1210 07:07:53.045412  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.045421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:53.045430  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:53.045441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:53.100408  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:53.100444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:53.115165  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:53.115192  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:53.192011  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:53.181167    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.181972    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.182929    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.185331    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.186084    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:53.181167    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.181972    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.182929    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.185331    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.186084    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:53.192034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:53.192049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:53.220495  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:53.220572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
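
Each polling pass above walks a fixed list of control-plane component names and asks the CRI runtime for matching containers; an empty ID list for every name is what produces the repeated "0 containers" lines and "No container was found" warnings. A compact sketch of that scan, with the component list copied from the log and crictl plus sudo on the host assumed:

    // scan_components.go: sketch of the per-component CRI scan in the passes above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
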
	I1210 07:07:55.749081  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:55.759242  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:55.759314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:55.782656  303437 cri.go:89] found id: ""
	I1210 07:07:55.782681  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.782690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:55.782707  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:55.782766  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:55.807483  303437 cri.go:89] found id: ""
	I1210 07:07:55.807509  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.807527  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:55.807534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:55.807595  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:55.832851  303437 cri.go:89] found id: ""
	I1210 07:07:55.832887  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.832896  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:55.832906  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:55.832966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:55.857553  303437 cri.go:89] found id: ""
	I1210 07:07:55.857575  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.857584  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:55.857591  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:55.857653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:55.885207  303437 cri.go:89] found id: ""
	I1210 07:07:55.885230  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.885240  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:55.885246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:55.885315  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:55.909296  303437 cri.go:89] found id: ""
	I1210 07:07:55.909322  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.909332  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:55.909340  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:55.909398  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:55.933701  303437 cri.go:89] found id: ""
	I1210 07:07:55.933723  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.933733  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:55.933740  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:55.933812  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:55.958095  303437 cri.go:89] found id: ""
	I1210 07:07:55.958121  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.958130  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:55.958139  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:55.958150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:56.028949  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:56.021322    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.021920    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023068    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023441    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.025194    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:56.021322    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.021920    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023068    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023441    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.025194    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:56.028976  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:56.029046  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:56.055269  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:56.055308  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:56.087408  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:56.087438  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:56.143537  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:56.143570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
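
Every "describe nodes" attempt fails identically: the in-node kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container running there is nothing listening, so the TCP connect is refused before any HTTP exchange starts. A quick probe that reproduces just that symptom, with the address taken from the errors above and the timeout an arbitrary choice for the sketch:

    // probe_apiserver.go: sketch of the TCP step behind the "connection
    // refused" errors above; localhost:8443 comes from the log.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // e.g. dial tcp [::1]:8443: connect: connection refused
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
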
	I1210 07:07:58.657737  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:58.669685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:58.669751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:58.704925  303437 cri.go:89] found id: ""
	I1210 07:07:58.704947  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.704955  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:58.704962  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:58.705021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:58.732775  303437 cri.go:89] found id: ""
	I1210 07:07:58.732798  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.732806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:58.732812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:58.732871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:58.757863  303437 cri.go:89] found id: ""
	I1210 07:07:58.757885  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.757893  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:58.757899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:58.757957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:58.782893  303437 cri.go:89] found id: ""
	I1210 07:07:58.782914  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.782923  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:58.782929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:58.782987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:58.813425  303437 cri.go:89] found id: ""
	I1210 07:07:58.813458  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.813467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:58.813474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:58.813531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:58.837894  303437 cri.go:89] found id: ""
	I1210 07:07:58.837920  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.837930  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:58.837937  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:58.837994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:58.862767  303437 cri.go:89] found id: ""
	I1210 07:07:58.862793  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.862803  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:58.862810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:58.862871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:58.887161  303437 cri.go:89] found id: ""
	I1210 07:07:58.887190  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.887203  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:58.887213  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:58.887226  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:58.912742  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:58.912774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:58.941751  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:58.941778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:58.997499  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:58.997538  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:59.012690  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:59.012716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:59.079032  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:59.173255  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:59.241772  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:59.241906  303437 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:07:59.245162  303437 out.go:179] * Enabled addons: 
	I1210 07:07:59.248019  303437 addons.go:530] duration metric: took 1m50.382393488s for enable addons: enabled=[]
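
The storage-provisioner manifest is applied with the kubectl binary staged inside the node; because apply validates the manifest against the server's OpenAPI document first, it trips over the same refused connection, so the addon machinery logs "apply failed, will retry" and ultimately reports an empty enabled=[] list. A hedged sketch of such a retry wrapper; the attempt count and delay below are invented for illustration and are not minikube's configured values:

    // apply_retry.go: illustrative retry around kubectl apply, in the spirit
    // of the "will retry" message above. Paths are copied from the log; the
    // three attempts and fixed 2s delay are assumptions for the sketch.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "apply", "--force", "-f",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        }
        var lastErr error
        for attempt := 1; attempt <= 3; attempt++ {
            out, err := exec.Command("sudo", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("applied on attempt %d:\n%s", attempt, out)
                return
            }
            lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apply failed, giving up:", lastErr)
    }
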
	I1210 07:08:01.579277  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:01.590395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:01.590469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:01.616988  303437 cri.go:89] found id: ""
	I1210 07:08:01.617017  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.617025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:01.617032  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:01.617095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:01.643533  303437 cri.go:89] found id: ""
	I1210 07:08:01.643555  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.643563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:01.643570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:01.643633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:01.683402  303437 cri.go:89] found id: ""
	I1210 07:08:01.683430  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.683439  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:01.683446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:01.683507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:01.714420  303437 cri.go:89] found id: ""
	I1210 07:08:01.714448  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.714457  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:01.714463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:01.714522  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:01.741588  303437 cri.go:89] found id: ""
	I1210 07:08:01.741614  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.741625  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:01.741632  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:01.741697  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:01.766133  303437 cri.go:89] found id: ""
	I1210 07:08:01.766163  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.766172  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:01.766178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:01.766246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:01.796151  303437 cri.go:89] found id: ""
	I1210 07:08:01.796173  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.796181  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:01.796188  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:01.796253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:01.821826  303437 cri.go:89] found id: ""
	I1210 07:08:01.821848  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.821857  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:01.821872  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:01.821883  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:01.856135  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:01.856162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:01.912548  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:01.912582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:01.926252  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:01.926279  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:01.989471  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:01.989491  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:01.989504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
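
The host-level gatherers are plain journald and kernel-ring queries: the last 400 lines of the kubelet and containerd units via journalctl -u <unit> -n 400, and dmesg filtered to warn severity and above (with util-linux dmesg, -H selects human-readable output, -P disables the pager, and -L=never disables color). A sketch that collects the same three sources, assuming a systemd host with journalctl and util-linux dmesg available:

    // gather_logs.go: sketch of the journalctl/dmesg gathering shown above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a bash pipeline and prints its combined output.
    func run(label, script string) {
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
    }

    func main() {
        run("kubelet", "sudo journalctl -u kubelet -n 400")
        run("containerd", "sudo journalctl -u containerd -n 400")
        run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
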
	I1210 07:08:04.519169  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:04.529774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:04.529853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:04.557926  303437 cri.go:89] found id: ""
	I1210 07:08:04.557950  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.557967  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:04.557988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:04.558067  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:04.585171  303437 cri.go:89] found id: ""
	I1210 07:08:04.585195  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.585204  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:04.585223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:04.585292  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:04.613695  303437 cri.go:89] found id: ""
	I1210 07:08:04.613720  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.613729  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:04.613735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:04.613808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:04.637775  303437 cri.go:89] found id: ""
	I1210 07:08:04.637859  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.637880  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:04.637899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:04.637989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:04.673966  303437 cri.go:89] found id: ""
	I1210 07:08:04.674033  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.674057  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:04.674073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:04.674161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:04.706760  303437 cri.go:89] found id: ""
	I1210 07:08:04.706825  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.706846  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:04.706865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:04.706955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:04.748640  303437 cri.go:89] found id: ""
	I1210 07:08:04.748707  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.748731  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:04.748749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:04.748837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:04.778179  303437 cri.go:89] found id: ""
	I1210 07:08:04.778241  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.778263  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:04.778283  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:04.778324  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:04.838994  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:04.839038  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:04.852663  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:04.852737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:04.919247  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:04.910928    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.911685    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913313    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913950    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.915537    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:04.910928    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.911685    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913313    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913950    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.915537    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:04.919311  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:04.919346  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:04.944409  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:04.944441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
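
Each pass opens with pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match, so a zero exit status signals a live apiserver process. In this run the check never succeeds, which is why the scan keeps repeating. A sketch that uses the exit status as the liveness signal:

    // apiserver_up.go: sketch of the pgrep-based liveness check above,
    // relying only on pgrep's exit status (0 = at least one match).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func apiserverRunning() bool {
        // -f: full command line; -x: whole-line match; -n: newest only.
        err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        fmt.Println("kube-apiserver running:", apiserverRunning())
    }
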
	I1210 07:08:07.475233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:07.485817  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:07.485889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:07.510450  303437 cri.go:89] found id: ""
	I1210 07:08:07.510473  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.510482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:07.510488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:07.510549  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:07.536516  303437 cri.go:89] found id: ""
	I1210 07:08:07.536541  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.536550  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:07.536556  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:07.536646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:07.561868  303437 cri.go:89] found id: ""
	I1210 07:08:07.561893  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.561902  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:07.561908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:07.561987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:07.590197  303437 cri.go:89] found id: ""
	I1210 07:08:07.590221  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.590230  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:07.590236  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:07.590342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:07.613514  303437 cri.go:89] found id: ""
	I1210 07:08:07.613539  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.613548  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:07.613555  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:07.613662  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:07.638377  303437 cri.go:89] found id: ""
	I1210 07:08:07.638402  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.638410  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:07.638417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:07.638477  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:07.667985  303437 cri.go:89] found id: ""
	I1210 07:08:07.668058  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.668082  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:07.668102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:07.668189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:07.698530  303437 cri.go:89] found id: ""
	I1210 07:08:07.698605  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.698647  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:07.698671  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:07.698710  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:07.761708  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:07.761745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:07.775951  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:07.775978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:07.842158  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:07.833714    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.834583    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836355    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836801    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.838463    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:07.833714    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.834583    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836355    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836801    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.838463    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:07.842183  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:07.842200  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:07.868656  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:07.868693  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
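
Taken together, the passes form a fixed-interval wait loop: check for the apiserver, gather diagnostics, sleep roughly three seconds (the pgrep timestamps step from 07:08:01 to 07:08:04 to 07:08:07), and repeat until a deadline. A minimal sketch of that control flow; the interval and timeout are read off the timestamps here and are assumptions, not minikube's configured values:

    // wait_loop.go: sketch of the poll-until-healthy flow implied by the
    // repeating passes. Interval and timeout are inferred assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver is up")
                return
            }
            // Diagnostics would be gathered here, as in the log above.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
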
	I1210 07:08:10.398249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:10.410905  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:10.410974  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:10.441450  303437 cri.go:89] found id: ""
	I1210 07:08:10.441474  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.441482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:10.441489  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:10.441551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:10.467324  303437 cri.go:89] found id: ""
	I1210 07:08:10.467345  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.467354  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:10.467360  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:10.467422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:10.490980  303437 cri.go:89] found id: ""
	I1210 07:08:10.491001  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.491117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:10.491125  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:10.491186  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:10.515608  303437 cri.go:89] found id: ""
	I1210 07:08:10.515673  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.515688  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:10.515696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:10.515753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:10.540198  303437 cri.go:89] found id: ""
	I1210 07:08:10.540223  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.540232  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:10.540246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:10.540304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:10.565060  303437 cri.go:89] found id: ""
	I1210 07:08:10.565125  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.565140  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:10.565155  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:10.565219  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:10.593396  303437 cri.go:89] found id: ""
	I1210 07:08:10.593430  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.593438  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:10.593445  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:10.593510  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:10.617363  303437 cri.go:89] found id: ""
	I1210 07:08:10.617395  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.617405  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:10.617414  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:10.617426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:10.677240  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:10.677317  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:10.692150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:10.692220  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:10.758835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:10.750046    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.750802    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.753608    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.754055    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.755555    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:10.750046    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.750802    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.753608    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.754055    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.755555    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:10.758906  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:10.758934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:10.783900  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:10.783935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
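
Note that "describe nodes" runs the kubectl binary minikube stages inside the node at /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl, pinned to the cluster's Kubernetes version and pointed at the in-node kubeconfig, rather than any kubectl on the host. A one-call sketch of that invocation, with both paths copied verbatim from the log:

    // describe_nodes.go: sketch of the version-pinned kubectl call above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
    }
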
	I1210 07:08:13.316158  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:13.326768  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:13.326841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:13.354375  303437 cri.go:89] found id: ""
	I1210 07:08:13.354402  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.354411  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:13.354417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:13.354486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:13.379439  303437 cri.go:89] found id: ""
	I1210 07:08:13.379467  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.379479  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:13.379491  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:13.379572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:13.406403  303437 cri.go:89] found id: ""
	I1210 07:08:13.406425  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.406433  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:13.406439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:13.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:13.441528  303437 cri.go:89] found id: ""
	I1210 07:08:13.441633  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.441665  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:13.441698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:13.441887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:13.485367  303437 cri.go:89] found id: ""
	I1210 07:08:13.485407  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.485416  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:13.485423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:13.485491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:13.515544  303437 cri.go:89] found id: ""
	I1210 07:08:13.515572  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.515582  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:13.515588  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:13.515646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:13.541572  303437 cri.go:89] found id: ""
	I1210 07:08:13.541604  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.541613  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:13.541620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:13.541692  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:13.566335  303437 cri.go:89] found id: ""
	I1210 07:08:13.566366  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.566376  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:13.566385  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:13.566396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:13.622359  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:13.622391  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:13.635632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:13.635661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:13.716667  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:13.707886    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.708774    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.710652    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.711407    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.713108    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:13.716691  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:13.716711  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:13.743967  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:13.744002  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
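The cycle above is minikube's control-plane probe: it looks for a running kube-apiserver process, asks the CRI for each expected control-plane container (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet, dashboard), and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before retrying. The same probe can be reproduced by hand with the commands the log itself runs, assuming shell access to the node:

    # Is an apiserver process running at all? (the probe the poll loop uses)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Any apiserver container known to the CRI, running or exited?
    sudo crictl ps -a --quiet --name=kube-apiserver

    # The same kubelet/containerd logs the gather step collects
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400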
	I1210 07:08:16.273094  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:16.283420  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:16.283488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:16.307336  303437 cri.go:89] found id: ""
	I1210 07:08:16.307358  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.307366  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:16.307373  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:16.307430  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:16.330448  303437 cri.go:89] found id: ""
	I1210 07:08:16.330476  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.330485  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:16.330492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:16.330552  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:16.362050  303437 cri.go:89] found id: ""
	I1210 07:08:16.362080  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.362089  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:16.362096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:16.362172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:16.385708  303437 cri.go:89] found id: ""
	I1210 07:08:16.385732  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.385741  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:16.385747  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:16.385852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:16.421398  303437 cri.go:89] found id: ""
	I1210 07:08:16.421427  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.421436  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:16.421442  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:16.421509  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:16.449046  303437 cri.go:89] found id: ""
	I1210 07:08:16.449074  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.449082  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:16.449089  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:16.449166  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:16.475499  303437 cri.go:89] found id: ""
	I1210 07:08:16.475525  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.475534  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:16.475541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:16.475619  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:16.502476  303437 cri.go:89] found id: ""
	I1210 07:08:16.502506  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.502515  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:16.502524  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:16.502535  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:16.530854  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:16.530929  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:16.586993  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:16.587030  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:16.600337  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:16.600364  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:16.669775  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:16.660570    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.661341    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.663229    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.664010    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.665709    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:16.669849  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:16.669875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:19.199141  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:19.209670  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:19.209739  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:19.242748  303437 cri.go:89] found id: ""
	I1210 07:08:19.242775  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.242784  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:19.242791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:19.242849  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:19.266957  303437 cri.go:89] found id: ""
	I1210 07:08:19.266980  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.266989  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:19.266995  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:19.267066  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:19.293252  303437 cri.go:89] found id: ""
	I1210 07:08:19.293276  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.293285  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:19.293292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:19.293349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:19.318070  303437 cri.go:89] found id: ""
	I1210 07:08:19.318096  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.318105  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:19.318112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:19.318171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:19.341744  303437 cri.go:89] found id: ""
	I1210 07:08:19.341769  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.341783  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:19.341789  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:19.341847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:19.366605  303437 cri.go:89] found id: ""
	I1210 07:08:19.366632  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.366641  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:19.366648  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:19.366706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:19.393536  303437 cri.go:89] found id: ""
	I1210 07:08:19.393561  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.393570  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:19.393576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:19.393633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:19.422513  303437 cri.go:89] found id: ""
	I1210 07:08:19.422535  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.422546  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:19.422556  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:19.422566  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:19.453046  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:19.453118  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:19.488889  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:19.488918  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:19.547224  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:19.547259  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:19.562006  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:19.562035  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:19.625530  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:19.617363    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.618299    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620102    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620530    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.622148    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
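Every describe-nodes attempt fails identically: nothing is listening on localhost:8443, so kubectl's API-group discovery is refused before any real request is sent. Two quick checks to confirm that from the node; this is a sketch that assumes curl and ss are present in the node image, which this log does not show:

    # Expect a refused connection (curl exit code 7) while the apiserver is down
    curl -sk https://localhost:8443/healthz

    # Show whether anything is listening on port 8443
    sudo ss -ltnp | grep 8443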
	I1210 07:08:22.125860  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:22.136477  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:22.136550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:22.164763  303437 cri.go:89] found id: ""
	I1210 07:08:22.164786  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.164795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:22.164801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:22.164861  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:22.190879  303437 cri.go:89] found id: ""
	I1210 07:08:22.190900  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.190909  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:22.190915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:22.190973  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:22.215247  303437 cri.go:89] found id: ""
	I1210 07:08:22.215278  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.215286  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:22.215292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:22.215351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:22.239059  303437 cri.go:89] found id: ""
	I1210 07:08:22.239086  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.239095  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:22.239102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:22.239163  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:22.264259  303437 cri.go:89] found id: ""
	I1210 07:08:22.264284  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.264293  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:22.264299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:22.264357  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:22.289890  303437 cri.go:89] found id: ""
	I1210 07:08:22.289913  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.289923  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:22.289929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:22.289987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:22.317025  303437 cri.go:89] found id: ""
	I1210 07:08:22.317051  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.317060  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:22.317067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:22.317124  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:22.341933  303437 cri.go:89] found id: ""
	I1210 07:08:22.341965  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.341974  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:22.341992  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:22.342004  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:22.398310  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:22.398344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:22.413479  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:22.413520  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:22.490851  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:22.482047    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.482772    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.484530    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.485118    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.486931    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:22.490873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:22.490888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:22.518860  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:22.518891  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:25.049142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:25.060069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:25.060142  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:25.089203  303437 cri.go:89] found id: ""
	I1210 07:08:25.089232  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.089242  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:25.089248  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:25.089317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:25.118751  303437 cri.go:89] found id: ""
	I1210 07:08:25.118776  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.118785  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:25.118791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:25.118848  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:25.143129  303437 cri.go:89] found id: ""
	I1210 07:08:25.143163  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.143173  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:25.143179  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:25.143240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:25.169805  303437 cri.go:89] found id: ""
	I1210 07:08:25.169830  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.169839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:25.169846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:25.169905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:25.194716  303437 cri.go:89] found id: ""
	I1210 07:08:25.194743  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.194752  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:25.194759  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:25.194818  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:25.221104  303437 cri.go:89] found id: ""
	I1210 07:08:25.221127  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.221135  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:25.221141  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:25.221199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:25.249738  303437 cri.go:89] found id: ""
	I1210 07:08:25.249762  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.249771  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:25.249784  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:25.249842  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:25.273527  303437 cri.go:89] found id: ""
	I1210 07:08:25.273552  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.273562  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:25.273572  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:25.273583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:25.298962  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:25.298996  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:25.326742  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:25.326770  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:25.381274  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:25.381307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:25.394260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:25.394289  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:25.485635  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:25.478200    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.479077    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.480640    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.481256    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.482290    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
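The timestamps (07:08:13, :16, :19, :22, :25, ...) show the probe re-running on roughly a 3-second interval. A simplified bash stand-in for that loop, using only the pgrep probe from the log; the 20-attempt cap is an arbitrary illustration, not minikube's actual timeout:

    # Poll every ~3 s until an apiserver process appears, giving up after 20 tries
    for i in $(seq 1 20); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is running"
        break
      fi
      sleep 3
    done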
	I1210 07:08:27.987151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:28.000081  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:28.000164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:28.025871  303437 cri.go:89] found id: ""
	I1210 07:08:28.025896  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.025904  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:28.025917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:28.025978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:28.050799  303437 cri.go:89] found id: ""
	I1210 07:08:28.050822  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.050831  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:28.050837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:28.050902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:28.075890  303437 cri.go:89] found id: ""
	I1210 07:08:28.075912  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.075921  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:28.075928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:28.075988  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:28.100461  303437 cri.go:89] found id: ""
	I1210 07:08:28.100483  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.100492  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:28.100499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:28.100555  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:28.126583  303437 cri.go:89] found id: ""
	I1210 07:08:28.126607  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.126617  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:28.126623  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:28.126682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:28.156736  303437 cri.go:89] found id: ""
	I1210 07:08:28.156758  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.156767  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:28.156774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:28.156837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:28.181562  303437 cri.go:89] found id: ""
	I1210 07:08:28.181635  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.181657  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:28.181675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:28.181760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:28.206007  303437 cri.go:89] found id: ""
	I1210 07:08:28.206081  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.206106  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:28.206127  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:28.206163  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:28.219409  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:28.219445  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:28.285367  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:28.277994    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.278613    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280604    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.282182    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:28.285387  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:28.285399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:28.310115  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:28.310150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:28.337400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:28.337427  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:30.895800  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:30.906215  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:30.906285  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:30.940989  303437 cri.go:89] found id: ""
	I1210 07:08:30.941016  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.941025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:30.941031  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:30.941089  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:30.968174  303437 cri.go:89] found id: ""
	I1210 07:08:30.968196  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.968205  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:30.968211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:30.968267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:30.997147  303437 cri.go:89] found id: ""
	I1210 07:08:30.997181  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.997191  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:30.997198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:30.997324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:31.027985  303437 cri.go:89] found id: ""
	I1210 07:08:31.028024  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.028033  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:31.028039  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:31.028101  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:31.052662  303437 cri.go:89] found id: ""
	I1210 07:08:31.052684  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.052693  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:31.052699  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:31.052760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:31.078026  303437 cri.go:89] found id: ""
	I1210 07:08:31.078051  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.078060  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:31.078067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:31.078129  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:31.106108  303437 cri.go:89] found id: ""
	I1210 07:08:31.106135  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.106144  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:31.106150  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:31.106212  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:31.133109  303437 cri.go:89] found id: ""
	I1210 07:08:31.133133  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.133141  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:31.133150  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:31.133162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:31.158330  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:31.158369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:31.190546  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:31.190570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:31.245193  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:31.245228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:31.258848  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:31.258882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:31.332332  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:31.324865    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.325506    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327103    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327745    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.329226    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:33.832563  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:33.843389  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:33.843462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:33.868588  303437 cri.go:89] found id: ""
	I1210 07:08:33.868612  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.868621  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:33.868627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:33.868691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:33.893467  303437 cri.go:89] found id: ""
	I1210 07:08:33.893492  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.893501  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:33.893507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:33.893568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:33.925853  303437 cri.go:89] found id: ""
	I1210 07:08:33.925883  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.925892  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:33.925899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:33.925961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:33.957483  303437 cri.go:89] found id: ""
	I1210 07:08:33.957507  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.957516  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:33.957523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:33.957582  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:33.990903  303437 cri.go:89] found id: ""
	I1210 07:08:33.990927  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.990937  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:33.990943  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:33.991005  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:34.017222  303437 cri.go:89] found id: ""
	I1210 07:08:34.017249  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.017258  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:34.017264  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:34.017346  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:34.043888  303437 cri.go:89] found id: ""
	I1210 07:08:34.043913  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.043921  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:34.043928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:34.044001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:34.069229  303437 cri.go:89] found id: ""
	I1210 07:08:34.069299  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.069314  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:34.069325  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:34.069337  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:34.127059  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:34.127093  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:34.140507  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:34.140537  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:34.205618  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:34.198206    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.198939    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200406    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200898    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.202347    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:34.205639  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:34.205651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:34.230228  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:34.230258  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
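Each failed call uses the kubeconfig at /var/lib/minikube/kubeconfig, and every refusal targets localhost:8443. To confirm which server endpoint that kubeconfig actually names, the same kubectl binary the log invokes can print the cluster's server field; the jsonpath query below is an illustration, not taken from this log:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'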
	I1210 07:08:36.756574  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:36.768692  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:36.768761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:36.791900  303437 cri.go:89] found id: ""
	I1210 07:08:36.791922  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.791930  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:36.791936  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:36.791994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:36.818662  303437 cri.go:89] found id: ""
	I1210 07:08:36.818683  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.818691  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:36.818697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:36.818753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:36.846695  303437 cri.go:89] found id: ""
	I1210 07:08:36.846718  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.846727  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:36.846733  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:36.846794  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:36.870384  303437 cri.go:89] found id: ""
	I1210 07:08:36.870408  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.870417  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:36.870423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:36.870486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:36.895312  303437 cri.go:89] found id: ""
	I1210 07:08:36.895335  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.895343  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:36.895349  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:36.895408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:36.926574  303437 cri.go:89] found id: ""
	I1210 07:08:36.926602  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.926611  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:36.926617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:36.926684  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:36.956760  303437 cri.go:89] found id: ""
	I1210 07:08:36.956786  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.956795  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:36.956801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:36.956864  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:36.983460  303437 cri.go:89] found id: ""
	I1210 07:08:36.983480  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.983488  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:36.983497  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:36.983512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:37.039889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:37.039926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:37.053431  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:37.053508  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:37.117639  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:37.109822    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.110762    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.111850    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.112355    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.113887    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:37.109822    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.110762    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.111850    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.112355    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.113887    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:37.117660  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:37.117673  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:37.148315  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:37.148357  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:39.681355  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:39.695207  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:39.695290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:39.725514  303437 cri.go:89] found id: ""
	I1210 07:08:39.725547  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.725556  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:39.725563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:39.725632  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:39.750801  303437 cri.go:89] found id: ""
	I1210 07:08:39.750834  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.750844  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:39.750850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:39.750920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:39.775756  303437 cri.go:89] found id: ""
	I1210 07:08:39.775779  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.775788  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:39.775794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:39.775853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:39.805059  303437 cri.go:89] found id: ""
	I1210 07:08:39.805085  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.805094  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:39.805100  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:39.805158  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:39.829219  303437 cri.go:89] found id: ""
	I1210 07:08:39.829284  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.829301  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:39.829309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:39.829371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:39.858144  303437 cri.go:89] found id: ""
	I1210 07:08:39.858168  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.858177  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:39.858184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:39.858243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:39.886805  303437 cri.go:89] found id: ""
	I1210 07:08:39.886838  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.886846  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:39.886853  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:39.886919  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:39.918064  303437 cri.go:89] found id: ""
	I1210 07:08:39.918089  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.918099  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:39.918108  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:39.918119  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:39.982343  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:39.982418  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:39.995829  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:39.995854  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:40.078976  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:40.070049    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.070741    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.072500    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.073281    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.074971    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:40.070049    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.070741    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.072500    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.073281    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.074971    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:40.079001  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:40.079033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:40.105734  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:40.105778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
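
Every "describe nodes" attempt above fails the same way: kubectl cannot reach https://localhost:8443 and reports "connect: connection refused", i.e. nothing is listening on the apiserver port at all. A sketch of a direct TCP probe that reproduces the same verdict without going through kubectl (the 2-second timeout is an assumed value, not from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same host:port the kubectl calls in the log are dialing.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver bound, this prints a refusal matching the log.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
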
	I1210 07:08:42.635583  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:42.646316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:42.646387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:42.687725  303437 cri.go:89] found id: ""
	I1210 07:08:42.687746  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.687755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:42.687761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:42.687821  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:42.731127  303437 cri.go:89] found id: ""
	I1210 07:08:42.731148  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.731157  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:42.731163  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:42.731224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:42.761187  303437 cri.go:89] found id: ""
	I1210 07:08:42.761218  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.761227  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:42.761232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:42.761293  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:42.789156  303437 cri.go:89] found id: ""
	I1210 07:08:42.789184  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.789193  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:42.789200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:42.789259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:42.813508  303437 cri.go:89] found id: ""
	I1210 07:08:42.813533  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.813542  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:42.813548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:42.813607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:42.838567  303437 cri.go:89] found id: ""
	I1210 07:08:42.838591  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.838601  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:42.838608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:42.838667  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:42.862315  303437 cri.go:89] found id: ""
	I1210 07:08:42.862340  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.862348  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:42.862355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:42.862415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:42.888411  303437 cri.go:89] found id: ""
	I1210 07:08:42.888486  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.888502  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:42.888513  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:42.888526  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:42.950009  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:42.950042  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:42.965591  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:42.965617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:43.040631  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:43.032737    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.033256    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035076    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035768    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.037307    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:43.032737    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.033256    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035076    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035768    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.037307    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:43.040653  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:43.040667  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:43.067163  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:43.067197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:45.596845  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:45.607484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:45.607551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:45.631812  303437 cri.go:89] found id: ""
	I1210 07:08:45.631841  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.631851  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:45.631857  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:45.631916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:45.656686  303437 cri.go:89] found id: ""
	I1210 07:08:45.656709  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.656717  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:45.656724  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:45.656782  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:45.705244  303437 cri.go:89] found id: ""
	I1210 07:08:45.705270  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.705279  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:45.705286  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:45.705349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:45.733649  303437 cri.go:89] found id: ""
	I1210 07:08:45.733671  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.733679  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:45.733685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:45.733748  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:45.758319  303437 cri.go:89] found id: ""
	I1210 07:08:45.758340  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.758349  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:45.758355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:45.758416  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:45.782339  303437 cri.go:89] found id: ""
	I1210 07:08:45.782360  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.782369  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:45.782375  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:45.782434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:45.806598  303437 cri.go:89] found id: ""
	I1210 07:08:45.806624  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.806633  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:45.806640  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:45.806700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:45.830909  303437 cri.go:89] found id: ""
	I1210 07:08:45.830933  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.830942  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:45.830951  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:45.830962  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:45.859118  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:45.859148  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:45.920835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:45.920869  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:45.935529  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:45.935555  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:46.015051  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:46.007172    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.007866    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.009596    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.010127    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.011638    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:46.007172    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.007866    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.009596    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.010127    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.011638    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:46.015073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:46.015086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
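
The "container status" step uses a shell fallback chain, preferring crictl when installed and falling back to docker: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The same one-liner wrapped in Go for a local run — a sketch assuming /bin/bash and sudo on the node, as the log's own invocations do:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command string copied verbatim from the log's container-status gather.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Reached only when both crictl and docker listings fail.
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}
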
	I1210 07:08:48.541223  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:48.551805  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:48.551874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:48.576818  303437 cri.go:89] found id: ""
	I1210 07:08:48.576878  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.576891  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:48.576898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:48.576963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:48.601980  303437 cri.go:89] found id: ""
	I1210 07:08:48.602005  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.602014  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:48.602020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:48.602082  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:48.634301  303437 cri.go:89] found id: ""
	I1210 07:08:48.634324  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.634333  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:48.634339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:48.634399  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:48.665296  303437 cri.go:89] found id: ""
	I1210 07:08:48.665321  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.665330  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:48.665336  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:48.665395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:48.696396  303437 cri.go:89] found id: ""
	I1210 07:08:48.696421  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.696430  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:48.696437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:48.696500  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:48.732263  303437 cri.go:89] found id: ""
	I1210 07:08:48.732288  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.732297  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:48.732304  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:48.732365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:48.759127  303437 cri.go:89] found id: ""
	I1210 07:08:48.759152  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.759161  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:48.759170  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:48.759229  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:48.783999  303437 cri.go:89] found id: ""
	I1210 07:08:48.784077  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.784100  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:48.784116  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:48.784141  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:48.797102  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:48.797132  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:48.859523  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:48.852279    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.852826    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854371    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854816    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.856244    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:48.852279    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.852826    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854371    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854816    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.856244    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:48.859546  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:48.859560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:48.884680  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:48.884714  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:48.923070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:48.923098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
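
The kubelet and containerd logs are gathered with "journalctl -u <unit> -n 400". A self-contained sketch that tails both units the same way; the unit names and the 400-line count come from the log, while running it locally is an assumption that requires systemd and sudo:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		// Mirrors "sudo journalctl -u <unit> -n 400" from the log.
		out, err := exec.Command("sudo", "journalctl", "-u", unit,
			"-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s (last 400 lines) ==\n%s", unit, out)
	}
}
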
	I1210 07:08:51.485606  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:51.496059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:51.496133  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:51.521404  303437 cri.go:89] found id: ""
	I1210 07:08:51.521429  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.521438  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:51.521444  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:51.521504  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:51.546743  303437 cri.go:89] found id: ""
	I1210 07:08:51.546768  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.546777  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:51.546785  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:51.546847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:51.577064  303437 cri.go:89] found id: ""
	I1210 07:08:51.577089  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.577099  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:51.577105  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:51.577171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:51.602384  303437 cri.go:89] found id: ""
	I1210 07:08:51.602410  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.602420  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:51.602426  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:51.602484  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:51.630338  303437 cri.go:89] found id: ""
	I1210 07:08:51.630367  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.630375  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:51.630382  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:51.630440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:51.660663  303437 cri.go:89] found id: ""
	I1210 07:08:51.660691  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.660700  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:51.660706  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:51.660765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:51.689142  303437 cri.go:89] found id: ""
	I1210 07:08:51.689170  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.689179  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:51.689186  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:51.689246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:51.723765  303437 cri.go:89] found id: ""
	I1210 07:08:51.723792  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.723800  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:51.723810  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:51.723824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:51.781842  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:51.781873  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:51.795845  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:51.795872  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:51.863519  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:51.855577    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.856333    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858048    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858719    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.860050    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:51.855577    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.856333    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858048    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858719    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.860050    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:51.863583  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:51.863611  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:51.888478  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:51.888510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
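
Each "describe nodes" gather shells out to the version-pinned kubectl binary with an explicit kubeconfig. A sketch of that invocation with stdout and stderr captured separately, so the "connection refused" lines land on stderr exactly as in the failure blocks above; the binary and kubeconfig paths are copied from the log, and the error formatting is illustrative, not minikube's:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Paths taken verbatim from the log above.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// With the apiserver down, kubectl exits 1 and stderr carries
		// the memcache.go "connection refused" lines seen in the log.
		fmt.Printf("describe nodes failed: %v\nstderr:\n%s",
			err, stderr.String())
		return
	}
	fmt.Print(stdout.String())
}
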
	I1210 07:08:54.421755  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:54.432308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:54.432377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:54.458171  303437 cri.go:89] found id: ""
	I1210 07:08:54.458194  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.458209  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:54.458216  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:54.458279  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:54.485658  303437 cri.go:89] found id: ""
	I1210 07:08:54.485689  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.485698  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:54.485704  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:54.485763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:54.514257  303437 cri.go:89] found id: ""
	I1210 07:08:54.514279  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.514287  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:54.514294  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:54.514360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:54.538966  303437 cri.go:89] found id: ""
	I1210 07:08:54.539053  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.539078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:54.539096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:54.539182  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:54.563486  303437 cri.go:89] found id: ""
	I1210 07:08:54.563512  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.563521  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:54.563528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:54.563588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:54.588780  303437 cri.go:89] found id: ""
	I1210 07:08:54.588805  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.588814  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:54.588827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:54.588886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:54.618322  303437 cri.go:89] found id: ""
	I1210 07:08:54.618346  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.618356  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:54.618362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:54.618421  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:54.643564  303437 cri.go:89] found id: ""
	I1210 07:08:54.643592  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.643602  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:54.643612  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:54.643624  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:54.683994  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:54.684069  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:54.743900  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:54.743934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:54.757240  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:54.757266  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:54.820795  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:54.813522    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.813935    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.815612    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.816020    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.817550    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:54.813522    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.813935    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.815612    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.816020    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.817550    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:54.820815  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:54.820830  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.345608  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:57.358499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:57.358625  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:57.384563  303437 cri.go:89] found id: ""
	I1210 07:08:57.384589  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.384598  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:57.384604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:57.384682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:57.408236  303437 cri.go:89] found id: ""
	I1210 07:08:57.408263  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.408272  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:57.408279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:57.408337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:57.432014  303437 cri.go:89] found id: ""
	I1210 07:08:57.432037  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.432045  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:57.432052  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:57.432111  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:57.455970  303437 cri.go:89] found id: ""
	I1210 07:08:57.456046  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.456068  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:57.456088  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:57.456173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:57.480680  303437 cri.go:89] found id: ""
	I1210 07:08:57.480752  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.480767  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:57.480775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:57.480841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:57.505993  303437 cri.go:89] found id: ""
	I1210 07:08:57.506026  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.506037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:57.506043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:57.506153  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:57.530713  303437 cri.go:89] found id: ""
	I1210 07:08:57.530739  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.530748  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:57.530754  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:57.530814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:57.555806  303437 cri.go:89] found id: ""
	I1210 07:08:57.555871  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.555897  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:57.555918  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:57.555943  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:57.611292  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:57.611326  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:57.624707  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:57.624735  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:57.707745  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:57.699963    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.701079    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702632    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702942    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.704373    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:57.699963    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.701079    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702632    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702942    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.704373    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:57.707768  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:57.707780  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.734701  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:57.734734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
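The cycle above is minikube's standard diagnostic sweep after the control plane fails to come up: enumerate each expected CRI container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), then gather kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch of the same sweep run by hand, using only commands that appear verbatim in the log; <profile> is a placeholder for whichever profile the test created:

    # placeholder: replace <profile> with the test's profile name
    # enumerate an expected control-plane container (all such listings come back empty in this run)
    minikube ssh -p <profile> -- 'sudo crictl ps -a --quiet --name=kube-apiserver'
    # collect the same service logs the sweep gathers
    minikube ssh -p <profile> -- 'sudo journalctl -u kubelet -n 400'
    minikube ssh -p <profile> -- 'sudo journalctl -u containerd -n 400'
    minikube ssh -p <profile> -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'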
	I1210 07:09:00.266582  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:00.305476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:00.305924  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:00.366724  303437 cri.go:89] found id: ""
	I1210 07:09:00.366806  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.366839  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:00.366879  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:00.366992  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:00.396827  303437 cri.go:89] found id: ""
	I1210 07:09:00.396912  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.396939  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:00.396960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:00.397064  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:00.424504  303437 cri.go:89] found id: ""
	I1210 07:09:00.424531  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.424540  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:00.424547  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:00.424609  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:00.453893  303437 cri.go:89] found id: ""
	I1210 07:09:00.453921  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.453931  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:00.453938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:00.454001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:00.480406  303437 cri.go:89] found id: ""
	I1210 07:09:00.480432  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.480441  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:00.480448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:00.480508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:00.505747  303437 cri.go:89] found id: ""
	I1210 07:09:00.505779  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.505788  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:00.505795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:00.505856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:00.535288  303437 cri.go:89] found id: ""
	I1210 07:09:00.535311  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.535320  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:00.535326  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:00.535387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:00.565945  303437 cri.go:89] found id: ""
	I1210 07:09:00.565972  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.565989  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:00.566015  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:00.566034  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:00.596202  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:00.596228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:00.651714  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:00.651748  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:00.666338  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:00.666375  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:00.745706  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:00.737632    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.738139    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.739647    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.740156    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.741940    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:00.737632    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.738139    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.739647    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.740156    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.741940    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:00.745728  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:00.745742  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.272316  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:03.283628  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:03.283695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:03.309180  303437 cri.go:89] found id: ""
	I1210 07:09:03.309263  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.309285  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:03.309300  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:03.309373  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:03.334971  303437 cri.go:89] found id: ""
	I1210 07:09:03.334994  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.335003  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:03.335035  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:03.335096  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:03.361090  303437 cri.go:89] found id: ""
	I1210 07:09:03.361116  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.361125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:03.361131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:03.361189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:03.385067  303437 cri.go:89] found id: ""
	I1210 07:09:03.385141  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.385161  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:03.385169  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:03.385259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:03.420428  303437 cri.go:89] found id: ""
	I1210 07:09:03.420450  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.420459  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:03.420465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:03.420527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:03.453131  303437 cri.go:89] found id: ""
	I1210 07:09:03.453153  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.453162  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:03.453168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:03.453281  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:03.485206  303437 cri.go:89] found id: ""
	I1210 07:09:03.485236  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.485245  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:03.485251  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:03.485311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:03.517204  303437 cri.go:89] found id: ""
	I1210 07:09:03.517229  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.517238  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:03.517253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:03.517265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:03.530656  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:03.530728  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:03.596244  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:03.588660    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.589167    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.590688    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.591215    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.592799    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:03.588660    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.589167    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.590688    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.591215    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.592799    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:03.596305  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:03.596342  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.621847  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:03.621882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:03.649988  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:03.650024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:06.209516  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:06.219893  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:06.219970  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:06.244763  303437 cri.go:89] found id: ""
	I1210 07:09:06.244786  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.244795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:06.244801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:06.244862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:06.271479  303437 cri.go:89] found id: ""
	I1210 07:09:06.271501  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.271509  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:06.271515  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:06.271572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:06.295607  303437 cri.go:89] found id: ""
	I1210 07:09:06.295635  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.295644  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:06.295651  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:06.295706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:06.320774  303437 cri.go:89] found id: ""
	I1210 07:09:06.320798  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.320806  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:06.320823  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:06.320886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:06.349033  303437 cri.go:89] found id: ""
	I1210 07:09:06.349056  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.349064  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:06.349070  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:06.349127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:06.377330  303437 cri.go:89] found id: ""
	I1210 07:09:06.377352  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.377361  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:06.377367  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:06.377426  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:06.400983  303437 cri.go:89] found id: ""
	I1210 07:09:06.401005  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.401014  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:06.401021  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:06.401080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:06.431299  303437 cri.go:89] found id: ""
	I1210 07:09:06.431327  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.431336  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:06.431345  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:06.431356  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:06.462335  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:06.462369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:06.495348  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:06.495376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:06.551592  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:06.551627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:06.565270  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:06.565305  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:06.629933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:06.621965    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.622716    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.624429    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.625124    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.626708    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:06.621965    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.622716    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.624429    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.625124    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.626708    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:09.131098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:09.141585  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:09.141658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:09.168859  303437 cri.go:89] found id: ""
	I1210 07:09:09.168889  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.168898  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:09.168904  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:09.168966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:09.193427  303437 cri.go:89] found id: ""
	I1210 07:09:09.193448  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.193457  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:09.193463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:09.193520  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:09.217804  303437 cri.go:89] found id: ""
	I1210 07:09:09.217928  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.217954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:09.217975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:09.218083  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:09.242204  303437 cri.go:89] found id: ""
	I1210 07:09:09.242277  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.242303  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:09.242322  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:09.242404  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:09.268889  303437 cri.go:89] found id: ""
	I1210 07:09:09.268912  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.268920  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:09.268926  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:09.268984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:09.293441  303437 cri.go:89] found id: ""
	I1210 07:09:09.293514  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.293545  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:09.293563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:09.293671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:09.321925  303437 cri.go:89] found id: ""
	I1210 07:09:09.321946  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.321954  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:09.321960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:09.322026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:09.350603  303437 cri.go:89] found id: ""
	I1210 07:09:09.350623  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.350631  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:09.350641  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:09.350653  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:09.363382  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:09.363409  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:09.429669  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:09.421586    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.422246    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424200    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424743    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.426494    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:09.421586    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.422246    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424200    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424743    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.426494    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
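Every describe-nodes attempt in this run fails identically: kubectl cannot reach the apiserver on localhost:8443, and the error is "connection refused" rather than a TLS or auth failure, which means nothing is listening on the port at all. A quick probe to confirm that from inside the node, assuming ss from iproute2 is present in the node image (<profile> again a placeholder):

    # placeholder: replace <profile>; shows whether anything is bound to the apiserver port
    minikube ssh -p <profile> -- 'sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"'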
	I1210 07:09:09.429690  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:09.429702  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:09.461410  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:09.461444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:09.500508  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:09.500536  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:12.055555  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:12.066220  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:12.066289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:12.093446  303437 cri.go:89] found id: ""
	I1210 07:09:12.093468  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.093477  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:12.093484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:12.093543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:12.119338  303437 cri.go:89] found id: ""
	I1210 07:09:12.119361  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.119370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:12.119376  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:12.119436  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:12.146532  303437 cri.go:89] found id: ""
	I1210 07:09:12.146553  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.146562  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:12.146568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:12.146623  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:12.175977  303437 cri.go:89] found id: ""
	I1210 07:09:12.175999  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.176007  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:12.176013  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:12.176072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:12.200557  303437 cri.go:89] found id: ""
	I1210 07:09:12.200579  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.200588  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:12.200595  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:12.200651  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:12.224652  303437 cri.go:89] found id: ""
	I1210 07:09:12.224674  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.224684  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:12.224690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:12.224750  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:12.249147  303437 cri.go:89] found id: ""
	I1210 07:09:12.249171  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.249180  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:12.249187  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:12.249253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:12.272500  303437 cri.go:89] found id: ""
	I1210 07:09:12.272535  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.272543  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:12.272553  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:12.272580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:12.328368  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:12.328399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:12.341669  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:12.341699  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:12.401653  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:12.394790    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.395266    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396400    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396898    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.398538    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:12.394790    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.395266    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396400    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396898    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.398538    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:12.401708  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:12.401734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:12.431751  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:12.431791  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:14.963924  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:14.974138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:14.974206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:15.001054  303437 cri.go:89] found id: ""
	I1210 07:09:15.001080  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.001089  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:15.001097  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:15.001170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:15.040020  303437 cri.go:89] found id: ""
	I1210 07:09:15.040044  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.040053  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:15.040059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:15.040121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:15.065063  303437 cri.go:89] found id: ""
	I1210 07:09:15.065086  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.065095  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:15.065101  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:15.065161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:15.089689  303437 cri.go:89] found id: ""
	I1210 07:09:15.089714  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.089723  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:15.089729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:15.089797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:15.117422  303437 cri.go:89] found id: ""
	I1210 07:09:15.117446  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.117455  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:15.117462  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:15.117521  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:15.143475  303437 cri.go:89] found id: ""
	I1210 07:09:15.143498  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.143507  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:15.143514  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:15.143580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:15.168329  303437 cri.go:89] found id: ""
	I1210 07:09:15.168353  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.168363  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:15.168370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:15.168439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:15.196848  303437 cri.go:89] found id: ""
	I1210 07:09:15.196870  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.196879  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:15.196889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:15.196901  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:15.210071  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:15.210098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:15.270835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:15.262938    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.263645    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265180    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265486    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.267063    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:15.262938    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.263645    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265180    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265486    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.267063    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:15.270858  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:15.270870  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:15.296738  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:15.296774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:15.322760  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:15.322786  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:17.877564  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:17.887770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:17.887840  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:17.923653  303437 cri.go:89] found id: ""
	I1210 07:09:17.923691  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.923701  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:17.923708  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:17.923789  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:17.953013  303437 cri.go:89] found id: ""
	I1210 07:09:17.953058  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.953067  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:17.953073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:17.953155  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:17.987520  303437 cri.go:89] found id: ""
	I1210 07:09:17.987565  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.987574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:17.987587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:17.987655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:18.017344  303437 cri.go:89] found id: ""
	I1210 07:09:18.017367  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.017378  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:18.017385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:18.017448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:18.043560  303437 cri.go:89] found id: ""
	I1210 07:09:18.043592  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.043602  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:18.043609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:18.043670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:18.071253  303437 cri.go:89] found id: ""
	I1210 07:09:18.071299  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.071308  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:18.071317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:18.071395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:18.100328  303437 cri.go:89] found id: ""
	I1210 07:09:18.100350  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.100359  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:18.100364  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:18.100422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:18.124828  303437 cri.go:89] found id: ""
	I1210 07:09:18.124855  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.124864  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:18.124873  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:18.124906  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:18.180441  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:18.180473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:18.193811  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:18.193838  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:18.254675  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:18.247379    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.248083    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.249676    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.250042    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.251523    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:18.247379    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.248083    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.249676    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.250042    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.251523    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:18.254700  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:18.254720  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:18.280133  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:18.280167  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
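Between sweeps, the test polls roughly every three seconds for a running apiserver process with sudo pgrep -xnf kube-apiserver.*minikube.*, and each poll comes back empty before the next sweep starts. A sketch of the equivalent wait loop (the iteration count is illustrative, not the test's actual deadline):

    # poll for the apiserver process until it appears or we give up
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done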
	I1210 07:09:20.813863  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:20.824103  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:20.824175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:20.847793  303437 cri.go:89] found id: ""
	I1210 07:09:20.847818  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.847827  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:20.847833  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:20.847896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:20.873295  303437 cri.go:89] found id: ""
	I1210 07:09:20.873319  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.873328  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:20.873334  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:20.873394  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:20.897570  303437 cri.go:89] found id: ""
	I1210 07:09:20.897594  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.897603  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:20.897609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:20.897665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:20.932999  303437 cri.go:89] found id: ""
	I1210 07:09:20.933025  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.933034  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:20.933041  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:20.933099  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:20.967096  303437 cri.go:89] found id: ""
	I1210 07:09:20.967123  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.967137  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:20.967143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:20.967203  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:20.994239  303437 cri.go:89] found id: ""
	I1210 07:09:20.994265  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.994274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:20.994281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:20.994337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:21.020205  303437 cri.go:89] found id: ""
	I1210 07:09:21.020230  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.020238  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:21.020245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:21.020305  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:21.049401  303437 cri.go:89] found id: ""
	I1210 07:09:21.049427  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.049436  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
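
The scan above asks crictl for each control-plane component by name and finds no containers at all. A compact equivalent of that eight-step scan, assuming the same crictl flags and component names shown in the log:

    # One loop instead of eight separate calls; names copied from the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -n "$ids" ]; then
            echo "$name: $ids"
        else
            echo "no container matching \"$name\""
        fi
    done
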
	I1210 07:09:21.049445  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:21.049457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:21.062901  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:21.062926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:21.122517  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:21.115640    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.116120    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117591    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117985    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.119485    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:21.115640    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.116120    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117591    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117985    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.119485    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:21.122537  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:21.122550  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:21.147196  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:21.147230  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:21.177192  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:21.177221  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:23.732133  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:23.742890  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:23.742961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:23.774220  303437 cri.go:89] found id: ""
	I1210 07:09:23.774243  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.774251  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:23.774257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:23.774317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:23.798816  303437 cri.go:89] found id: ""
	I1210 07:09:23.798837  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.798846  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:23.798852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:23.798910  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:23.823244  303437 cri.go:89] found id: ""
	I1210 07:09:23.823318  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.823341  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:23.823362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:23.823453  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:23.851474  303437 cri.go:89] found id: ""
	I1210 07:09:23.851500  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.851510  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:23.851516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:23.851598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:23.876565  303437 cri.go:89] found id: ""
	I1210 07:09:23.876641  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.876665  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:23.876679  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:23.876753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:23.901598  303437 cri.go:89] found id: ""
	I1210 07:09:23.901624  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.901632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:23.901641  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:23.901698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:23.939880  303437 cri.go:89] found id: ""
	I1210 07:09:23.945774  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.945837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:23.945917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:23.946105  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:23.983936  303437 cri.go:89] found id: ""
	I1210 07:09:23.984019  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.984045  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:23.984096  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:23.984128  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:24.047417  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:24.047454  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:24.060782  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:24.060808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:24.123547  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:24.115234    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.115940    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.117664    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.118067    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.119622    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:24.115234    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.115940    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.117664    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.118067    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.119622    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:24.123570  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:24.123583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:24.148767  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:24.148802  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:26.679138  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:26.691239  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:26.691311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:26.720725  303437 cri.go:89] found id: ""
	I1210 07:09:26.720748  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.720756  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:26.720763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:26.720824  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:26.745903  303437 cri.go:89] found id: ""
	I1210 07:09:26.745926  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.745935  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:26.745941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:26.745999  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:26.771250  303437 cri.go:89] found id: ""
	I1210 07:09:26.771279  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.771289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:26.771295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:26.771354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:26.795771  303437 cri.go:89] found id: ""
	I1210 07:09:26.795795  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.795804  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:26.795810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:26.795912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:26.820992  303437 cri.go:89] found id: ""
	I1210 07:09:26.821013  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.821023  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:26.821029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:26.821091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:26.849537  303437 cri.go:89] found id: ""
	I1210 07:09:26.849559  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.849568  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:26.849575  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:26.849631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:26.882245  303437 cri.go:89] found id: ""
	I1210 07:09:26.882274  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.882284  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:26.882290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:26.882354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:26.907397  303437 cri.go:89] found id: ""
	I1210 07:09:26.907421  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.907437  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:26.907446  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:26.907457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:26.945593  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:26.945619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:27.009478  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:27.009515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:27.023242  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:27.023268  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:27.088362  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:27.080378    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.081206    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.082792    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.083225    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.084935    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:27.080378    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.081206    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.082792    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.083225    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.084935    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:27.088384  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:27.088396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
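
Each gathering pass collects the same four sources: the kubelet and containerd journals, filtered dmesg output, and a container-status listing. Bundled into one hypothetical helper script (the commands are copied from the Run: lines above; the script name and output paths are illustrative):

    #!/bin/bash
    # gather-node-logs.sh -- bundle the diagnostics this loop keeps collecting.
    out=/tmp/node-logs; mkdir -p "$out"
    sudo journalctl -u kubelet -n 400    > "$out/kubelet.log"
    sudo journalctl -u containerd -n 400 > "$out/containerd.log"
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > "$out/dmesg.log"
    sudo "$(which crictl || echo crictl)" ps -a > "$out/containers.txt" \
        || sudo docker ps -a > "$out/containers.txt"
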
	I1210 07:09:29.614457  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:29.624717  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:29.624839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:29.648905  303437 cri.go:89] found id: ""
	I1210 07:09:29.648929  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.648938  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:29.648944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:29.649031  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:29.693513  303437 cri.go:89] found id: ""
	I1210 07:09:29.693576  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.693597  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:29.693615  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:29.693703  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:29.718997  303437 cri.go:89] found id: ""
	I1210 07:09:29.719090  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.719114  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:29.719132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:29.719215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:29.749199  303437 cri.go:89] found id: ""
	I1210 07:09:29.749266  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.749289  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:29.749307  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:29.749402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:29.774719  303437 cri.go:89] found id: ""
	I1210 07:09:29.774795  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.774819  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:29.774841  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:29.774931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:29.799913  303437 cri.go:89] found id: ""
	I1210 07:09:29.799977  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.799999  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:29.800017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:29.800095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:29.823673  303437 cri.go:89] found id: ""
	I1210 07:09:29.823747  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.823769  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:29.823787  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:29.823859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:29.848157  303437 cri.go:89] found id: ""
	I1210 07:09:29.848188  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.848198  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:29.848208  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:29.848219  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:29.876009  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:29.876037  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:29.932276  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:29.932307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:29.949872  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:29.949898  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:30.045838  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:30.034181    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.034859    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037026    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037737    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.040198    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:30.034181    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.034859    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037026    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037737    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.040198    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:30.045873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:30.045888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.576040  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:32.587217  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:32.587298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:32.613690  303437 cri.go:89] found id: ""
	I1210 07:09:32.613713  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.613722  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:32.613729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:32.613797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:32.639153  303437 cri.go:89] found id: ""
	I1210 07:09:32.639178  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.639187  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:32.639193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:32.639256  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:32.673727  303437 cri.go:89] found id: ""
	I1210 07:09:32.673799  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.673808  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:32.673815  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:32.673882  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:32.709195  303437 cri.go:89] found id: ""
	I1210 07:09:32.709222  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.709231  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:32.709238  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:32.709298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:32.737425  303437 cri.go:89] found id: ""
	I1210 07:09:32.737458  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.737467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:32.737474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:32.737532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:32.766042  303437 cri.go:89] found id: ""
	I1210 07:09:32.766069  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.766078  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:32.766086  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:32.766145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:32.791060  303437 cri.go:89] found id: ""
	I1210 07:09:32.791089  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.791098  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:32.791104  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:32.791164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:32.815424  303437 cri.go:89] found id: ""
	I1210 07:09:32.815445  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.815453  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:32.815462  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:32.815473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.845676  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:32.845718  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:32.877898  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:32.877927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:32.934870  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:32.934903  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:32.950436  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:32.950516  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:33.023900  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:33.014878    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.015644    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.017336    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.018088    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.019810    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:33.014878    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.015644    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.017336    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.018088    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.019810    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
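
Since no kube-apiserver container ever appears, two useful follow-up checks are whether kubelet and containerd themselves are running and whether anything is bound to the apiserver port. A hedged sketch, assuming systemctl and ss are available inside the node:

    # Are the host services that should start the control plane active?
    sudo systemctl is-active kubelet containerd

    # Is anything at all listening on the apiserver port seen in the errors above?
    sudo ss -tlnp | grep ':8443' || echo "no listener on 8443"
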
	I1210 07:09:35.524178  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:35.535098  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:35.535173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:35.563582  303437 cri.go:89] found id: ""
	I1210 07:09:35.563606  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.563614  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:35.563621  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:35.563682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:35.589346  303437 cri.go:89] found id: ""
	I1210 07:09:35.589368  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.589377  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:35.589384  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:35.589442  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:35.613807  303437 cri.go:89] found id: ""
	I1210 07:09:35.613833  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.613841  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:35.613848  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:35.613907  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:35.643139  303437 cri.go:89] found id: ""
	I1210 07:09:35.643162  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.643172  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:35.643178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:35.643240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:35.682597  303437 cri.go:89] found id: ""
	I1210 07:09:35.682629  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.682638  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:35.682645  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:35.682711  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:35.716718  303437 cri.go:89] found id: ""
	I1210 07:09:35.716739  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.716747  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:35.716753  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:35.716811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:35.746357  303437 cri.go:89] found id: ""
	I1210 07:09:35.746378  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.746387  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:35.746393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:35.746455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:35.773219  303437 cri.go:89] found id: ""
	I1210 07:09:35.773240  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.773251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:35.773260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:35.773273  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:35.838850  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:35.830993    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.831530    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833193    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833865    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.835553    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:35.830993    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.831530    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833193    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833865    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.835553    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:35.838868  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:35.838882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:35.864265  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:35.864299  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:35.892689  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:35.892716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:35.952281  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:35.952311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.468021  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:38.478500  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:38.478574  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:38.505131  303437 cri.go:89] found id: ""
	I1210 07:09:38.505156  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.505174  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:38.505197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:38.505267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:38.529142  303437 cri.go:89] found id: ""
	I1210 07:09:38.529166  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.529175  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:38.529181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:38.529239  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:38.554410  303437 cri.go:89] found id: ""
	I1210 07:09:38.554434  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.554442  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:38.554449  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:38.554506  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:38.581372  303437 cri.go:89] found id: ""
	I1210 07:09:38.581395  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.581403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:38.581409  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:38.581472  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:38.606157  303437 cri.go:89] found id: ""
	I1210 07:09:38.606182  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.606191  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:38.606198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:38.606261  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:38.630691  303437 cri.go:89] found id: ""
	I1210 07:09:38.630717  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.630725  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:38.630731  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:38.630788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:38.655423  303437 cri.go:89] found id: ""
	I1210 07:09:38.655447  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.655456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:38.655463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:38.655524  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:38.685788  303437 cri.go:89] found id: ""
	I1210 07:09:38.685814  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.685822  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:38.685832  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:38.685844  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:38.750704  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:38.750740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.764389  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:38.764417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:38.825803  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:38.818667    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.819289    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.820727    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.821107    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.822534    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:38.818667    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.819289    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.820727    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.821107    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.822534    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:38.825824  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:38.825836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:38.850907  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:38.850941  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:41.382590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:41.392996  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:41.393069  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:41.417044  303437 cri.go:89] found id: ""
	I1210 07:09:41.417069  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.417077  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:41.417083  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:41.417146  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:41.442003  303437 cri.go:89] found id: ""
	I1210 07:09:41.442077  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.442107  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:41.442127  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:41.442200  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:41.466958  303437 cri.go:89] found id: ""
	I1210 07:09:41.466985  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.466994  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:41.467000  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:41.467081  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:41.491996  303437 cri.go:89] found id: ""
	I1210 07:09:41.492018  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.492027  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:41.492033  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:41.492093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:41.517865  303437 cri.go:89] found id: ""
	I1210 07:09:41.517890  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.517908  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:41.517929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:41.518012  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:41.544162  303437 cri.go:89] found id: ""
	I1210 07:09:41.544184  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.544193  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:41.544199  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:41.544259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:41.573308  303437 cri.go:89] found id: ""
	I1210 07:09:41.573381  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.573404  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:41.573422  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:41.573502  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:41.602427  303437 cri.go:89] found id: ""
	I1210 07:09:41.602457  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.602467  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:41.602492  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:41.602511  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:41.658769  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:41.658803  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:41.681233  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:41.681259  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:41.747373  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:41.738699    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.739334    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.741375    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.742059    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.744132    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:41.747398  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:41.747411  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:41.772193  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:41.772224  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
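
Each probe above follows the same pattern: run "crictl ps -a --quiet --name=<component>" on the node and treat empty output as "component not running". A minimal local sketch of that step, using plain os/exec in place of minikube's ssh_runner (the helper name listContainerIDs is illustrative, not minikube's API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors the probe in the log: with --quiet, crictl
    // prints one container ID per line; no output means no containers.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: probe failed: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
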
	I1210 07:09:44.302640  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:44.313058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:44.313127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:44.341886  303437 cri.go:89] found id: ""
	I1210 07:09:44.341914  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.341929  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:44.341935  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:44.341995  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:44.367439  303437 cri.go:89] found id: ""
	I1210 07:09:44.367460  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.367469  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:44.367475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:44.367532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:44.391640  303437 cri.go:89] found id: ""
	I1210 07:09:44.391668  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.391678  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:44.391685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:44.391780  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:44.421140  303437 cri.go:89] found id: ""
	I1210 07:09:44.421169  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.421178  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:44.421185  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:44.421263  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:44.444759  303437 cri.go:89] found id: ""
	I1210 07:09:44.444783  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.444792  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:44.444798  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:44.444858  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:44.468926  303437 cri.go:89] found id: ""
	I1210 07:09:44.468959  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.468968  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:44.468978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:44.469045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:44.495556  303437 cri.go:89] found id: ""
	I1210 07:09:44.495581  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.495590  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:44.495597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:44.495676  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:44.519631  303437 cri.go:89] found id: ""
	I1210 07:09:44.519654  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.519663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:44.519672  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:44.519684  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:44.532940  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:44.532964  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:44.598861  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:44.590948    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.591655    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593344    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593846    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.595521    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:44.598921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:44.598950  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:44.624141  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:44.624181  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:44.651186  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:44.651214  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
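
The kubectl stderr above is a transport failure, not an API error: nothing is listening on localhost:8443, so every request dies with "connection refused" before reaching any server. A tiny probe makes that concrete (a sketch, assuming the same host and port as the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint kubectl is failing against in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
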
	I1210 07:09:47.208206  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:47.218613  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:47.218695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:47.244616  303437 cri.go:89] found id: ""
	I1210 07:09:47.244643  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.244652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:47.244659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:47.244717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:47.270353  303437 cri.go:89] found id: ""
	I1210 07:09:47.270378  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.270387  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:47.270393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:47.270469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:47.296082  303437 cri.go:89] found id: ""
	I1210 07:09:47.296108  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.296117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:47.296123  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:47.296181  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:47.320296  303437 cri.go:89] found id: ""
	I1210 07:09:47.320362  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.320380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:47.320388  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:47.320459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:47.345546  303437 cri.go:89] found id: ""
	I1210 07:09:47.345571  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.345580  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:47.345587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:47.345647  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:47.375423  303437 cri.go:89] found id: ""
	I1210 07:09:47.375458  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.375467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:47.375475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:47.375536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:47.399857  303437 cri.go:89] found id: ""
	I1210 07:09:47.399880  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.399894  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:47.399901  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:47.399963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:47.431984  303437 cri.go:89] found id: ""
	I1210 07:09:47.432011  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.432019  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:47.432029  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:47.432060  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:47.458214  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:47.458248  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:47.490816  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:47.490843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:47.549328  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:47.549361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:47.562826  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:47.562855  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:47.624764  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:47.617028    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.617678    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619303    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619812    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.621440    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
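
The timestamps show the whole probe-and-gather cycle repeating roughly every three seconds (07:09:41, 07:09:44, 07:09:47, ...). Structurally that is a poll-until-deadline loop; a generic sketch of the shape, with probe standing in for the pgrep/crictl checks (the names waitFor and probe are illustrative, not minikube's code):

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"time"
    )

    // waitFor re-runs probe every interval until it succeeds or timeout expires.
    func waitFor(probe func() bool, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if probe() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
    	up := func() bool {
    		conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
    		if err != nil {
    			return false
    		}
    		conn.Close()
    		return true
    	}
    	if err := waitFor(up, 3*time.Second, time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
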
	I1210 07:09:50.125980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:50.136223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:50.136289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:50.169825  303437 cri.go:89] found id: ""
	I1210 07:09:50.169858  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.169867  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:50.169874  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:50.169966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:50.198977  303437 cri.go:89] found id: ""
	I1210 07:09:50.199000  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.199031  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:50.199039  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:50.199095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:50.235780  303437 cri.go:89] found id: ""
	I1210 07:09:50.235803  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.235811  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:50.235817  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:50.235875  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:50.259548  303437 cri.go:89] found id: ""
	I1210 07:09:50.259570  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.259578  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:50.259585  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:50.259641  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:50.285338  303437 cri.go:89] found id: ""
	I1210 07:09:50.285361  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.285369  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:50.285375  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:50.285432  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:50.310647  303437 cri.go:89] found id: ""
	I1210 07:09:50.310669  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.310678  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:50.310685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:50.310741  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:50.334419  303437 cri.go:89] found id: ""
	I1210 07:09:50.334448  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.334458  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:50.334464  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:50.334521  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:50.359803  303437 cri.go:89] found id: ""
	I1210 07:09:50.359827  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.359837  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:50.359847  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:50.359858  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:50.384958  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:50.384994  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:50.421068  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:50.421093  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:50.477375  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:50.477409  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:50.490923  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:50.490954  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:50.556587  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:50.548374    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.549044    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.550820    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.551415    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.553008    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
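
Note that the five "Gathering logs for ..." sections come back in a different order on every pass: kubelet first in one cycle, containerd first in another. That shuffling is consistent with the log sources living in a Go map, whose iteration order is deliberately randomized on each range; whether minikube actually stores them that way is an assumption here. A hypothetical demo of the effect:

    package main

    import "fmt"

    func main() {
    	sources := map[string]bool{
    		"kubelet": true, "dmesg": true, "describe nodes": true,
    		"containerd": true, "container status": true,
    	}
    	for pass := 0; pass < 3; pass++ {
    		for name := range sources { // order varies from pass to pass
    			fmt.Print(name, "  ")
    		}
    		fmt.Println()
    	}
    }
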
	I1210 07:09:53.056876  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:53.067392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:53.067464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:53.092029  303437 cri.go:89] found id: ""
	I1210 07:09:53.092052  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.092062  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:53.092068  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:53.092125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:53.118131  303437 cri.go:89] found id: ""
	I1210 07:09:53.118156  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.118165  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:53.118172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:53.118232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:53.147375  303437 cri.go:89] found id: ""
	I1210 07:09:53.147398  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.147407  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:53.147413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:53.147471  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:53.184782  303437 cri.go:89] found id: ""
	I1210 07:09:53.184801  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.184810  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:53.184816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:53.184875  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:53.211867  303437 cri.go:89] found id: ""
	I1210 07:09:53.211892  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.211901  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:53.211908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:53.211965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:53.237656  303437 cri.go:89] found id: ""
	I1210 07:09:53.237678  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.237686  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:53.237693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:53.237761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:53.262840  303437 cri.go:89] found id: ""
	I1210 07:09:53.262861  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.262870  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:53.262876  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:53.262934  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:53.287214  303437 cri.go:89] found id: ""
	I1210 07:09:53.287235  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.287243  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:53.287252  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:53.287265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:53.316241  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:53.316267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:53.371646  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:53.371682  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:53.384755  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:53.384788  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:53.447921  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:53.440066    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.440752    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442394    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442882    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.444521    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:53.447948  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:53.447961  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:55.973173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:55.983576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:55.983656  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:56.011801  303437 cri.go:89] found id: ""
	I1210 07:09:56.011830  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.011840  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:56.011851  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:56.011968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:56.038072  303437 cri.go:89] found id: ""
	I1210 07:09:56.038104  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.038114  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:56.038120  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:56.038198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:56.068512  303437 cri.go:89] found id: ""
	I1210 07:09:56.068586  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.068610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:56.068629  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:56.068716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:56.094431  303437 cri.go:89] found id: ""
	I1210 07:09:56.094462  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.094471  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:56.094478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:56.094550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:56.120840  303437 cri.go:89] found id: ""
	I1210 07:09:56.120865  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.120875  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:56.120881  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:56.120957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:56.145302  303437 cri.go:89] found id: ""
	I1210 07:09:56.145335  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.145344  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:56.145350  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:56.145415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:56.177802  303437 cri.go:89] found id: ""
	I1210 07:09:56.177828  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.177837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:56.177843  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:56.177903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:56.217508  303437 cri.go:89] found id: ""
	I1210 07:09:56.217535  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.217544  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:56.217553  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:56.217565  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:56.236388  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:56.236414  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:56.299818  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:56.290345    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.291927    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.293053    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.294824    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.295281    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:56.299836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:56.299849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:56.324241  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:56.324274  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:56.351770  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:56.351798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
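
The "container status" command above encodes a two-level fallback: resolve crictl with which (falling back to the bare name if it is not on PATH), and if crictl fails entirely, try "docker ps -a" instead. The same logic in Go (a sketch; exec.LookPath plays the role of which):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() ([]byte, error) {
    	crictl, err := exec.LookPath("crictl")
    	if err != nil {
    		crictl = "crictl" // same fallback as `which crictl || echo crictl`
    	}
    	if out, err := exec.Command("sudo", crictl, "ps", "-a").CombinedOutput(); err == nil {
    		return out, nil
    	}
    	// crictl missing or failed: fall back to the Docker CLI, as the
    	// `|| sudo docker ps -a` clause does in the logged command.
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
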
	I1210 07:09:58.907151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:58.920281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:58.920355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:58.951789  303437 cri.go:89] found id: ""
	I1210 07:09:58.951887  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.951924  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:58.951955  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:58.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:58.988101  303437 cri.go:89] found id: ""
	I1210 07:09:58.988174  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.988200  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:58.988214  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:58.988289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:59.015007  303437 cri.go:89] found id: ""
	I1210 07:09:59.015061  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.015070  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:59.015076  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:59.015145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:59.041267  303437 cri.go:89] found id: ""
	I1210 07:09:59.041290  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.041299  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:59.041305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:59.041364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:59.065295  303437 cri.go:89] found id: ""
	I1210 07:09:59.065317  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.065325  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:59.065332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:59.065389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:59.090688  303437 cri.go:89] found id: ""
	I1210 07:09:59.090710  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.090719  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:59.090735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:59.090796  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:59.123411  303437 cri.go:89] found id: ""
	I1210 07:09:59.123433  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.123442  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:59.123448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:59.123507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:59.148970  303437 cri.go:89] found id: ""
	I1210 07:09:59.148995  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.149003  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:59.149013  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:59.149024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:59.213078  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:59.213112  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:59.229582  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:59.229610  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:59.291341  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:59.283620    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.284364    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.285965    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.286418    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.288009    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:59.291371  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:59.291383  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:59.316302  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:59.316335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:01.843334  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:01.854638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:01.854715  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:01.880761  303437 cri.go:89] found id: ""
	I1210 07:10:01.880783  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.880792  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:01.880802  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:01.880863  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:01.910547  303437 cri.go:89] found id: ""
	I1210 07:10:01.910582  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.910591  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:01.910597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:01.910659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:01.946840  303437 cri.go:89] found id: ""
	I1210 07:10:01.946868  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.946878  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:01.946885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:01.946947  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:01.978924  303437 cri.go:89] found id: ""
	I1210 07:10:01.978961  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.978970  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:01.978976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:01.979080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:02.019488  303437 cri.go:89] found id: ""
	I1210 07:10:02.019517  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.019536  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:02.019543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:02.019630  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:02.046286  303437 cri.go:89] found id: ""
	I1210 07:10:02.046307  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.046319  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:02.046325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:02.046390  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:02.072527  303437 cri.go:89] found id: ""
	I1210 07:10:02.072552  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.072562  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:02.072568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:02.072631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:02.097399  303437 cri.go:89] found id: ""
	I1210 07:10:02.097421  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.097430  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:02.097440  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:02.097451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:02.158615  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:02.158651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:02.174600  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:02.174685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:02.250555  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:02.241608    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.242681    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.244544    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.245035    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.246871    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:02.250577  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:02.250590  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:02.276945  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:02.276982  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
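
Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*": -f matches the pattern against the full command line, -x requires the whole line to match it, and -n returns only the newest matching PID, so empty output (pgrep exits with status 1) means no apiserver process exists yet. A thin wrapper around that check (hypothetical helper, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func apiserverPID() (string, bool) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", false // pgrep exits 1 when nothing matches
    	}
    	return strings.TrimSpace(string(out)), true
    }

    func main() {
    	if pid, ok := apiserverPID(); ok {
    		fmt.Println("kube-apiserver running, pid", pid)
    	} else {
    		fmt.Println("kube-apiserver not running")
    	}
    }
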
	I1210 07:10:04.815961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:04.826415  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:04.826482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:04.851192  303437 cri.go:89] found id: ""
	I1210 07:10:04.851217  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.851226  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:04.851233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:04.851295  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:04.880601  303437 cri.go:89] found id: ""
	I1210 07:10:04.880623  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.880632  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:04.880639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:04.880700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:04.910922  303437 cri.go:89] found id: ""
	I1210 07:10:04.910944  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.910954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:04.910960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:04.911053  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:04.945097  303437 cri.go:89] found id: ""
	I1210 07:10:04.945122  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.945131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:04.945137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:04.945198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:04.976739  303437 cri.go:89] found id: ""
	I1210 07:10:04.976759  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.976768  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:04.976774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:04.976828  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:05.004094  303437 cri.go:89] found id: ""
	I1210 07:10:05.004126  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.004136  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:05.004143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:05.004221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:05.031557  303437 cri.go:89] found id: ""
	I1210 07:10:05.031582  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.031591  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:05.031598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:05.031660  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:05.057223  303437 cri.go:89] found id: ""
	I1210 07:10:05.057245  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.057254  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:05.057264  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:05.057277  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:05.070835  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:05.070868  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:05.134682  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:05.126987    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.127646    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129140    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129598    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.131280    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:05.126987    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.127646    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129140    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129598    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.131280    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:05.134701  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:05.134713  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:05.161896  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:05.161984  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:05.199637  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:05.199661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
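
Context for the loop above: minikube is waiting for kube-apiserver to come up. Each cycle runs `pgrep -xnf kube-apiserver.*minikube.*`, then asks the CRI for each control-plane container by name; every query returns an empty ID list, so it gathers kubelet/containerd/dmesg/describe-nodes logs and retries. The same checks can be reproduced by hand on the node (a sketch, reusing only the commands shown in this log; the profile name is not shown in this excerpt, so <profile> is a placeholder):

    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver

An empty result from both, as seen throughout this section, indicates the apiserver container was never created rather than created and crashed.
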
	I1210 07:10:07.763534  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:07.773915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:07.773983  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:07.800754  303437 cri.go:89] found id: ""
	I1210 07:10:07.800778  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.800788  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:07.800794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:07.800856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:07.826430  303437 cri.go:89] found id: ""
	I1210 07:10:07.826453  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.826462  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:07.826468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:07.826527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:07.850496  303437 cri.go:89] found id: ""
	I1210 07:10:07.850517  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.850528  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:07.850534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:07.850592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:07.875524  303437 cri.go:89] found id: ""
	I1210 07:10:07.875546  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.875555  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:07.875561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:07.875622  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:07.905072  303437 cri.go:89] found id: ""
	I1210 07:10:07.905094  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.905103  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:07.905109  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:07.905189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:07.936426  303437 cri.go:89] found id: ""
	I1210 07:10:07.936449  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.936457  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:07.936464  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:07.936527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:07.973539  303437 cri.go:89] found id: ""
	I1210 07:10:07.973618  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.973640  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:07.973659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:07.973772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:07.999823  303437 cri.go:89] found id: ""
	I1210 07:10:07.999914  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.999941  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:07.999964  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:08.000003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:08.068982  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:08.060765    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062197    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062643    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064190    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064500    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:08.060765    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062197    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062643    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064190    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064500    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:08.069056  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:08.069079  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:08.094318  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:08.094351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:08.122292  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:08.122320  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:08.184455  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:08.184505  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:10.701562  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:10.711949  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:10.712015  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:10.737041  303437 cri.go:89] found id: ""
	I1210 07:10:10.737068  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.737078  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:10.737085  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:10.737152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:10.766737  303437 cri.go:89] found id: ""
	I1210 07:10:10.766759  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.766769  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:10.766775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:10.766833  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:10.795664  303437 cri.go:89] found id: ""
	I1210 07:10:10.795689  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.795698  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:10.795705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:10.795763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:10.819880  303437 cri.go:89] found id: ""
	I1210 07:10:10.819908  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.819917  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:10.819924  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:10.819986  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:10.843991  303437 cri.go:89] found id: ""
	I1210 07:10:10.844028  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.844037  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:10.844043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:10.844121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:10.868988  303437 cri.go:89] found id: ""
	I1210 07:10:10.869010  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.869019  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:10.869025  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:10.869088  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:10.893331  303437 cri.go:89] found id: ""
	I1210 07:10:10.893361  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.893371  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:10.893392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:10.893473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:10.925989  303437 cri.go:89] found id: ""
	I1210 07:10:10.926016  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.926025  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:10.926034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:10.926045  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:10.951381  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:10.951417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:10.992523  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:10.992547  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:11.048715  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:11.048751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:11.062864  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:11.062892  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:11.126862  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:11.117747    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.118147    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.119641    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.120260    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.122142    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:11.117747    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.118147    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.119641    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.120260    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.122142    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
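
The repeated "connection refused" stderr above is the expected symptom when no apiserver is listening: kubectl is pointed at the in-node kubeconfig (/var/lib/minikube/kubeconfig), whose server is https://localhost:8443, and with zero kube-apiserver containers nothing is bound to that port. A quick confirmation from inside the node (a sketch; these two commands are illustrative and do not appear in this log):

    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -k https://localhost:8443/healthz
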
	I1210 07:10:13.627173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:13.640121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:13.640189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:13.666074  303437 cri.go:89] found id: ""
	I1210 07:10:13.666097  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.666106  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:13.666112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:13.666172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:13.694979  303437 cri.go:89] found id: ""
	I1210 07:10:13.695001  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.695043  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:13.695051  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:13.695110  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:13.719004  303437 cri.go:89] found id: ""
	I1210 07:10:13.719045  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.719054  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:13.719066  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:13.719128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:13.743528  303437 cri.go:89] found id: ""
	I1210 07:10:13.743592  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.743614  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:13.743627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:13.743700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:13.773695  303437 cri.go:89] found id: ""
	I1210 07:10:13.773720  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.773737  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:13.773743  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:13.773802  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:13.797583  303437 cri.go:89] found id: ""
	I1210 07:10:13.797605  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.797614  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:13.797620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:13.797678  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:13.825318  303437 cri.go:89] found id: ""
	I1210 07:10:13.825348  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.825357  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:13.825363  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:13.825420  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:13.853561  303437 cri.go:89] found id: ""
	I1210 07:10:13.853585  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.853594  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:13.853604  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:13.853622  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:13.935926  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:13.915703    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.920870    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.921425    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923146    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923621    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:13.915703    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.920870    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.921425    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923146    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923621    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:13.935954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:13.935967  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:13.962598  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:13.962630  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:13.990458  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:13.990484  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:14.047843  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:14.047880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.562478  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:16.576152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:16.576222  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:16.604031  303437 cri.go:89] found id: ""
	I1210 07:10:16.604054  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.604063  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:16.604069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:16.604128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:16.628609  303437 cri.go:89] found id: ""
	I1210 07:10:16.628631  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.628640  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:16.628658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:16.628717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:16.653619  303437 cri.go:89] found id: ""
	I1210 07:10:16.653656  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.653665  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:16.653671  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:16.653756  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:16.682568  303437 cri.go:89] found id: ""
	I1210 07:10:16.682604  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.682613  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:16.682620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:16.682693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:16.707801  303437 cri.go:89] found id: ""
	I1210 07:10:16.707835  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.707845  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:16.707852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:16.707935  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:16.732620  303437 cri.go:89] found id: ""
	I1210 07:10:16.732688  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.732711  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:16.732728  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:16.732825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:16.758445  303437 cri.go:89] found id: ""
	I1210 07:10:16.758467  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.758475  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:16.758482  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:16.758539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:16.783975  303437 cri.go:89] found id: ""
	I1210 07:10:16.784001  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.784010  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:16.784019  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:16.784047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:16.814022  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:16.814049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:16.869237  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:16.869269  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.882654  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:16.882731  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:16.969042  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:16.957319    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.958047    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.960851    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.961625    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.964373    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:16.957319    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.958047    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.960851    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.961625    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.964373    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:16.969064  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:16.969086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
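
Note the cadence: each poll starts roughly three seconds after the previous gather finishes (07:10:04, :07, :10, :13, :16, :19, ...), and the order in which the five log sources are gathered varies between cycles (likely Go map iteration order). The retry itself is a plain poll-with-sleep; a minimal stand-alone equivalent (a sketch, not minikube's actual implementation, which is Go code; the 3-second interval is inferred from the timestamps above):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3   # assumed interval, inferred from this log's timestamps
    done
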
	I1210 07:10:19.496234  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:19.506951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:19.507093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:19.530611  303437 cri.go:89] found id: ""
	I1210 07:10:19.530643  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.530652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:19.530658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:19.530727  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:19.557799  303437 cri.go:89] found id: ""
	I1210 07:10:19.557835  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.557845  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:19.557852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:19.557920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:19.582933  303437 cri.go:89] found id: ""
	I1210 07:10:19.582967  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.582976  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:19.582983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:19.583072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:19.607826  303437 cri.go:89] found id: ""
	I1210 07:10:19.607889  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.607909  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:19.607917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:19.607979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:19.632512  303437 cri.go:89] found id: ""
	I1210 07:10:19.632580  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.632597  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:19.632604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:19.632665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:19.657636  303437 cri.go:89] found id: ""
	I1210 07:10:19.657668  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.657677  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:19.657684  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:19.657765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:19.682353  303437 cri.go:89] found id: ""
	I1210 07:10:19.682423  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.682456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:19.682476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:19.682562  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:19.706488  303437 cri.go:89] found id: ""
	I1210 07:10:19.706549  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.706582  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:19.706606  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:19.706644  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:19.719694  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:19.719721  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:19.784893  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:19.777331    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.777928    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.779521    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.780075    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.781604    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:19.777331    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.777928    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.779521    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.780075    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.781604    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:19.784915  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:19.784928  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:19.809606  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:19.809641  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:19.841622  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:19.841657  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:22.397071  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:22.407225  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:22.407298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:22.443280  303437 cri.go:89] found id: ""
	I1210 07:10:22.443304  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.443313  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:22.443320  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:22.443377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:22.476100  303437 cri.go:89] found id: ""
	I1210 07:10:22.476121  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.476130  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:22.476136  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:22.476197  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:22.504294  303437 cri.go:89] found id: ""
	I1210 07:10:22.504317  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.504326  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:22.504332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:22.504388  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:22.527983  303437 cri.go:89] found id: ""
	I1210 07:10:22.528006  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.528015  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:22.528028  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:22.528085  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:22.552219  303437 cri.go:89] found id: ""
	I1210 07:10:22.552243  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.552252  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:22.552257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:22.552314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:22.576437  303437 cri.go:89] found id: ""
	I1210 07:10:22.576459  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.576469  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:22.576475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:22.576530  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:22.601577  303437 cri.go:89] found id: ""
	I1210 07:10:22.601599  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.601608  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:22.601614  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:22.601671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:22.625855  303437 cri.go:89] found id: ""
	I1210 07:10:22.625878  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.625889  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:22.625899  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:22.625910  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:22.681686  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:22.681732  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:22.695126  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:22.695154  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:22.758688  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:22.751337    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.752263    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.753775    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.754238    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.755632    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:22.751337    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.752263    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.753775    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.754238    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.755632    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:22.758709  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:22.758722  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:22.783636  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:22.783671  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
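
The "container status" step above uses a shell fallback worth noting: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` resolves crictl's full path when it is on PATH, otherwise falls back to the bare name, and only if that invocation fails does it try `docker ps -a`. The same pattern spelled out in modern $(...) form (behavior-equivalent to the backtick version in the log):

    CRICTL="$(which crictl || echo crictl)"   # full path if found, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a # fall back to docker if the CRI call fails
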
	I1210 07:10:25.311139  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:25.321885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:25.321968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:25.346177  303437 cri.go:89] found id: ""
	I1210 07:10:25.346257  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.346280  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:25.346299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:25.346402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:25.371678  303437 cri.go:89] found id: ""
	I1210 07:10:25.371751  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.371766  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:25.371773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:25.371836  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:25.404393  303437 cri.go:89] found id: ""
	I1210 07:10:25.404419  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.404436  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:25.404450  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:25.404528  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:25.439726  303437 cri.go:89] found id: ""
	I1210 07:10:25.439766  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.439779  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:25.439803  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:25.439965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:25.476965  303437 cri.go:89] found id: ""
	I1210 07:10:25.476998  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.477007  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:25.477018  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:25.477127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:25.502342  303437 cri.go:89] found id: ""
	I1210 07:10:25.502369  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.502378  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:25.502385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:25.502451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:25.528396  303437 cri.go:89] found id: ""
	I1210 07:10:25.528423  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.528432  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:25.528439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:25.528543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:25.555005  303437 cri.go:89] found id: ""
	I1210 07:10:25.555065  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.555074  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:25.555083  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:25.555095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:25.568421  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:25.568450  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:25.629120  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:25.622083    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.622649    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624105    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624522    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.626001    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:25.622083    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.622649    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624105    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624522    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.626001    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:25.629143  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:25.629155  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:25.654736  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:25.654768  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:25.685404  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:25.685473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:28.247164  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:28.257638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:28.257709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:28.283706  303437 cri.go:89] found id: ""
	I1210 07:10:28.283729  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.283738  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:28.283744  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:28.283806  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:28.311304  303437 cri.go:89] found id: ""
	I1210 07:10:28.311327  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.311336  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:28.311342  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:28.311407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:28.336026  303437 cri.go:89] found id: ""
	I1210 07:10:28.336048  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.336056  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:28.336062  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:28.336121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:28.361333  303437 cri.go:89] found id: ""
	I1210 07:10:28.361354  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.361362  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:28.361369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:28.361428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:28.389101  303437 cri.go:89] found id: ""
	I1210 07:10:28.389123  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.389132  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:28.389138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:28.389196  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:28.422619  303437 cri.go:89] found id: ""
	I1210 07:10:28.422641  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.422649  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:28.422656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:28.422713  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:28.453144  303437 cri.go:89] found id: ""
	I1210 07:10:28.453217  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.453240  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:28.453260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:28.453347  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:28.483124  303437 cri.go:89] found id: ""
	I1210 07:10:28.483148  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.483158  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:28.483167  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:28.483178  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:28.496766  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:28.496793  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:28.563971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:28.556200    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.557736    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.558089    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559546    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559812    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:28.564003  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:28.564015  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:28.588981  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:28.589012  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:28.617971  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:28.618000  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.175214  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:31.187495  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:31.187568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:31.221446  303437 cri.go:89] found id: ""
	I1210 07:10:31.221473  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.221482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:31.221488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:31.221548  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:31.246343  303437 cri.go:89] found id: ""
	I1210 07:10:31.246377  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.246386  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:31.246392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:31.246459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:31.270266  303437 cri.go:89] found id: ""
	I1210 07:10:31.270289  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.270303  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:31.270309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:31.270365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:31.295166  303437 cri.go:89] found id: ""
	I1210 07:10:31.295190  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.295199  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:31.295219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:31.295284  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:31.320783  303437 cri.go:89] found id: ""
	I1210 07:10:31.320822  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.320831  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:31.320838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:31.320902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:31.344885  303437 cri.go:89] found id: ""
	I1210 07:10:31.344910  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.344919  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:31.344927  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:31.344984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:31.369604  303437 cri.go:89] found id: ""
	I1210 07:10:31.369627  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.369636  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:31.369642  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:31.369700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:31.396633  303437 cri.go:89] found id: ""
	I1210 07:10:31.396654  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.396663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:31.396672  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:31.396685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.458644  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:31.458678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:31.474603  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:31.474632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:31.540901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:31.533490    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.534340    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.535925    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.536236    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.537709    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:31.540921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:31.540933  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:31.565730  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:31.565763  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
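Each probe cycle asks containerd for every expected control-plane container by name. The same sweep can be reproduced by hand with a small loop built from the exact crictl invocation the harness runs (a sketch; run it inside the node):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # same flags as the harness: all states, IDs only
      echo "$name: ${ids:-<none>}"                      # <none> means no matching container was found
    done

Because ps -a includes exited containers, an empty result for every name typically means the kubelet never created the static control-plane pods at all, rather than the pods starting and then crashing.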
	I1210 07:10:34.098229  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:34.108967  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:34.109037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:34.137131  303437 cri.go:89] found id: ""
	I1210 07:10:34.137153  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.137162  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:34.137168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:34.137224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:34.171468  303437 cri.go:89] found id: ""
	I1210 07:10:34.171489  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.171498  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:34.171504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:34.171565  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:34.199509  303437 cri.go:89] found id: ""
	I1210 07:10:34.199531  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.199539  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:34.199545  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:34.199603  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:34.230270  303437 cri.go:89] found id: ""
	I1210 07:10:34.230292  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.230301  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:34.230308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:34.230368  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:34.257508  303437 cri.go:89] found id: ""
	I1210 07:10:34.257529  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.257538  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:34.257544  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:34.257598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:34.285487  303437 cri.go:89] found id: ""
	I1210 07:10:34.285509  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.285517  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:34.285524  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:34.285584  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:34.312438  303437 cri.go:89] found id: ""
	I1210 07:10:34.312460  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.312469  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:34.312475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:34.312535  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:34.336063  303437 cri.go:89] found id: ""
	I1210 07:10:34.336137  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.336152  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:34.336161  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:34.336172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:34.392136  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:34.392168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:34.405661  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:34.405691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:34.486073  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:34.478058    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.478642    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480446    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480885    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.482506    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:34.486096  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:34.486110  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:34.512711  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:34.512745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:37.043733  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:37.054272  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:37.054343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:37.080616  303437 cri.go:89] found id: ""
	I1210 07:10:37.080640  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.080649  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:37.080656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:37.080716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:37.104975  303437 cri.go:89] found id: ""
	I1210 07:10:37.105002  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.105010  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:37.105017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:37.105077  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:37.128929  303437 cri.go:89] found id: ""
	I1210 07:10:37.128952  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.128960  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:37.128966  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:37.129026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:37.154538  303437 cri.go:89] found id: ""
	I1210 07:10:37.154561  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.154570  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:37.154577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:37.154637  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:37.183900  303437 cri.go:89] found id: ""
	I1210 07:10:37.183920  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.183928  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:37.183934  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:37.183994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:37.218659  303437 cri.go:89] found id: ""
	I1210 07:10:37.218681  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.218689  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:37.218696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:37.218758  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:37.243786  303437 cri.go:89] found id: ""
	I1210 07:10:37.243808  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.243817  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:37.243824  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:37.243889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:37.271822  303437 cri.go:89] found id: ""
	I1210 07:10:37.271847  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.271856  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:37.271865  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:37.271877  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:37.327230  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:37.327261  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:37.340728  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:37.340755  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:37.402472  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:37.395392    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.395764    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397322    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397745    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.399404    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:37.402534  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:37.402560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:37.428514  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:37.428587  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:39.957676  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:39.968353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:39.968422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:39.996461  303437 cri.go:89] found id: ""
	I1210 07:10:39.996487  303437 logs.go:282] 0 containers: []
	W1210 07:10:39.996497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:39.996504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:39.996572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:40.052529  303437 cri.go:89] found id: ""
	I1210 07:10:40.052553  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.052563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:40.052570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:40.052635  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:40.083247  303437 cri.go:89] found id: ""
	I1210 07:10:40.083272  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.083282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:40.083288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:40.083349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:40.109171  303437 cri.go:89] found id: ""
	I1210 07:10:40.109195  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.109204  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:40.109211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:40.109271  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:40.138871  303437 cri.go:89] found id: ""
	I1210 07:10:40.138950  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.138972  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:40.138992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:40.139100  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:40.176299  303437 cri.go:89] found id: ""
	I1210 07:10:40.176335  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.176345  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:40.176352  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:40.176448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:40.213557  303437 cri.go:89] found id: ""
	I1210 07:10:40.213590  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.213600  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:40.213622  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:40.213706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:40.253605  303437 cri.go:89] found id: ""
	I1210 07:10:40.253639  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.253648  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:40.253658  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:40.253670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:40.289048  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:40.289076  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:40.348311  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:40.348344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:40.364207  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:40.364249  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:40.431287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:40.422606    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.423275    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.424961    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.425595    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.427272    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:40.431309  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:40.431325  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
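The journals gathered in each cycle are the primary evidence for why the static pods never came up. The manual equivalents of the harness commands, with --no-pager added here for interactive use, are:

    sudo journalctl -u kubelet -n 400 --no-pager      # kubelet: static pod creation and image pull errors
    sudo journalctl -u containerd -n 400 --no-pager   # containerd: runtime and sandbox failures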
	I1210 07:10:42.962817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:42.973583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:42.973714  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:43.004181  303437 cri.go:89] found id: ""
	I1210 07:10:43.004211  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.004222  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:43.004235  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:43.004302  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:43.031231  303437 cri.go:89] found id: ""
	I1210 07:10:43.031252  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.031261  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:43.031267  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:43.031324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:43.056959  303437 cri.go:89] found id: ""
	I1210 07:10:43.056991  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.057002  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:43.057009  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:43.057072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:43.086361  303437 cri.go:89] found id: ""
	I1210 07:10:43.086393  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.086403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:43.086413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:43.086481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:43.112977  303437 cri.go:89] found id: ""
	I1210 07:10:43.113003  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.113013  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:43.113020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:43.113079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:43.137716  303437 cri.go:89] found id: ""
	I1210 07:10:43.137740  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.137749  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:43.137755  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:43.137814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:43.173396  303437 cri.go:89] found id: ""
	I1210 07:10:43.173421  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.173431  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:43.173437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:43.173494  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:43.202828  303437 cri.go:89] found id: ""
	I1210 07:10:43.202852  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.202861  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:43.202871  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:43.202885  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:43.265997  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:43.266036  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:43.281547  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:43.281582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:43.359532  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:43.352125   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.352633   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354207   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354531   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.356009   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:43.359554  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:43.359567  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:43.392377  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:43.392433  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:45.942739  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:45.955296  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:45.955374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:45.984462  303437 cri.go:89] found id: ""
	I1210 07:10:45.984488  303437 logs.go:282] 0 containers: []
	W1210 07:10:45.984497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:45.984507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:45.984566  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:46.014873  303437 cri.go:89] found id: ""
	I1210 07:10:46.014898  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.014920  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:46.014928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:46.015038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:46.044539  303437 cri.go:89] found id: ""
	I1210 07:10:46.044565  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.044574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:46.044581  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:46.044642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:46.070950  303437 cri.go:89] found id: ""
	I1210 07:10:46.070975  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.070985  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:46.070992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:46.071091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:46.101134  303437 cri.go:89] found id: ""
	I1210 07:10:46.101160  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.101170  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:46.101176  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:46.101255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:46.126003  303437 cri.go:89] found id: ""
	I1210 07:10:46.126028  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.126037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:46.126044  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:46.126103  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:46.152209  303437 cri.go:89] found id: ""
	I1210 07:10:46.152231  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.152239  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:46.152245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:46.152303  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:46.183764  303437 cri.go:89] found id: ""
	I1210 07:10:46.183786  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.183794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:46.183803  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:46.183813  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:46.248135  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:46.248173  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:46.262749  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:46.262778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:46.330280  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:46.322629   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.323199   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.324997   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.325371   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.326892   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:46.330302  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:46.330315  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:46.356151  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:46.356184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:48.884130  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:48.894898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:48.894989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:48.919239  303437 cri.go:89] found id: ""
	I1210 07:10:48.919266  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.919275  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:48.919282  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:48.919343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:48.946463  303437 cri.go:89] found id: ""
	I1210 07:10:48.946487  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.946497  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:48.946509  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:48.946569  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:48.971661  303437 cri.go:89] found id: ""
	I1210 07:10:48.971735  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.971757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:48.971772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:48.971857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:48.996435  303437 cri.go:89] found id: ""
	I1210 07:10:48.996457  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.996466  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:48.996472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:48.996539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:49.023269  303437 cri.go:89] found id: ""
	I1210 07:10:49.023296  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.023305  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:49.023311  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:49.023371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:49.052018  303437 cri.go:89] found id: ""
	I1210 07:10:49.052042  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.052051  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:49.052058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:49.052125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:49.076866  303437 cri.go:89] found id: ""
	I1210 07:10:49.076929  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.076943  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:49.076951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:49.077009  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:49.105029  303437 cri.go:89] found id: ""
	I1210 07:10:49.105051  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.105061  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:49.105070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:49.105081  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:49.161025  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:49.161103  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:49.176997  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:49.177065  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:49.246287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:49.238608   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.239236   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.240909   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.241468   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.243158   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:49.238608   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.239236   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.240909   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.241468   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.243158   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:49.246359  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:49.246386  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:49.271827  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:49.271865  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
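The cycle above is the shape of every retry that follows: probe for a kube-apiserver process with pgrep, enumerate CRI containers for each expected control-plane component, and, when none are found, fall back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output (the order of the collectors varies between cycles). A minimal sketch of the same per-component enumeration, assuming crictl is on the PATH inside the node — the loop and component list are illustrative, not minikube's actual source:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # -a includes exited containers; --quiet prints only container IDs, so an
      # empty result means the component was never created, not merely stopped.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

An empty ID list for all eight names, as seen here, points at the kubelet never starting any static pods rather than a crash loop in one component.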
	I1210 07:10:51.801611  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:51.812172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:51.812240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:51.836841  303437 cri.go:89] found id: ""
	I1210 07:10:51.836864  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.836874  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:51.836880  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:51.836942  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:51.860730  303437 cri.go:89] found id: ""
	I1210 07:10:51.860754  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.860764  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:51.860770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:51.860831  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:51.885358  303437 cri.go:89] found id: ""
	I1210 07:10:51.885379  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.885388  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:51.885394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:51.885452  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:51.909974  303437 cri.go:89] found id: ""
	I1210 07:10:51.910038  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.910062  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:51.910080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:51.910152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:51.938488  303437 cri.go:89] found id: ""
	I1210 07:10:51.938553  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.938577  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:51.938596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:51.938669  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:51.964789  303437 cri.go:89] found id: ""
	I1210 07:10:51.964821  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.964831  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:51.964837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:51.964914  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:51.988457  303437 cri.go:89] found id: ""
	I1210 07:10:51.988478  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.988487  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:51.988493  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:51.988553  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:52.032140  303437 cri.go:89] found id: ""
	I1210 07:10:52.032164  303437 logs.go:282] 0 containers: []
	W1210 07:10:52.032177  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:52.032187  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:52.032198  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:52.058273  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:52.058311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:52.089897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:52.089924  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:52.145350  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:52.145387  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:52.162441  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:52.162475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:52.244944  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:52.233560   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.234752   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.239562   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.240120   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.241681   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:52.233560   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.234752   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.239562   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.240120   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.241681   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
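Every describe-nodes attempt fails identically: client-go cannot complete a TCP connection to localhost:8443, so the error surfaces at the dial step, before any HTTP request is made. "connection refused" there means nothing is listening on the apiserver port, consistent with the empty kube-apiserver container list above. A dependency-free probe using bash's built-in /dev/tcp (a hypothetical check, not something minikube runs) would show the same condition:

    if (exec 3<>/dev/tcp/localhost/8443) 2>/dev/null; then
      echo "port 8443 is open"
    else
      echo "connection refused: no kube-apiserver listening on 8443"
    fi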
	I1210 07:10:54.746617  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:54.757597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:54.757677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:54.785180  303437 cri.go:89] found id: ""
	I1210 07:10:54.785205  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.785215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:54.785222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:54.785283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:54.813159  303437 cri.go:89] found id: ""
	I1210 07:10:54.813184  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.813193  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:54.813200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:54.813258  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:54.840481  303437 cri.go:89] found id: ""
	I1210 07:10:54.840503  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.840512  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:54.840519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:54.840578  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:54.869478  303437 cri.go:89] found id: ""
	I1210 07:10:54.869500  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.869509  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:54.869516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:54.869573  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:54.892998  303437 cri.go:89] found id: ""
	I1210 07:10:54.893020  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.893028  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:54.893034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:54.893093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:54.921729  303437 cri.go:89] found id: ""
	I1210 07:10:54.921755  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.921765  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:54.921772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:54.921838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:54.946951  303437 cri.go:89] found id: ""
	I1210 07:10:54.946976  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.946985  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:54.946992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:54.947069  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:54.972444  303437 cri.go:89] found id: ""
	I1210 07:10:54.972466  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.972475  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:54.972484  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:54.972502  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:54.997696  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:54.997743  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:55.038495  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:55.038532  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:55.099784  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:55.099825  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:55.115531  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:55.115561  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:55.193319  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:55.183569   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.187128   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188202   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188540   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.190127   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:55.183569   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.187128   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188202   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188540   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.190127   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
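The container-status collector is the only step with a built-in fallback: the backtick substitution resolves crictl to its full path when installed (echoing the bare name otherwise, so the eventual failure message stays readable), and if the crictl invocation fails outright the command retries with docker. The same line rendered with $() instead of backticks, behavior unchanged:

    crictl_bin=$(which crictl || echo crictl)       # full path if installed, bare name otherwise
    sudo "$crictl_bin" ps -a || sudo docker ps -a   # fall back to docker when crictl fails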
	I1210 07:10:57.693558  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:57.704587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:57.704698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:57.733113  303437 cri.go:89] found id: ""
	I1210 07:10:57.733137  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.733147  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:57.733154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:57.733217  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:57.759697  303437 cri.go:89] found id: ""
	I1210 07:10:57.759721  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.759730  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:57.759736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:57.759813  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:57.785244  303437 cri.go:89] found id: ""
	I1210 07:10:57.785273  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.785282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:57.785288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:57.785349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:57.819299  303437 cri.go:89] found id: ""
	I1210 07:10:57.819324  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.819333  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:57.819339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:57.819397  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:57.843698  303437 cri.go:89] found id: ""
	I1210 07:10:57.843720  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.843729  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:57.843736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:57.843797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:57.867903  303437 cri.go:89] found id: ""
	I1210 07:10:57.867928  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.867938  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:57.867944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:57.868003  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:57.892038  303437 cri.go:89] found id: ""
	I1210 07:10:57.892065  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.892074  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:57.892080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:57.892144  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:57.917032  303437 cri.go:89] found id: ""
	I1210 07:10:57.917055  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.917064  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:57.917073  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:57.917084  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:57.972772  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:57.972808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:57.986446  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:57.986475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:58.053540  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:58.045445   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.046514   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.047201   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.048794   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.049108   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:58.045445   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.046514   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.047201   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.048794   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.049108   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:58.053559  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:58.053572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:58.078999  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:58.079080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
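The probe timestamps (07:10:51, 07:10:54, 07:10:57, 07:11:00, ...) show the apiserver check repeating on a roughly three-second cadence, with a full log-gathering pass after each failed probe. The loop shape, reconstructed as a sketch — the interval and timeout are illustrative, not minikube's configured values:

    # Poll until a kube-apiserver process for this profile appears, or give up.
    deadline=$((SECONDS + 360))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for apiserver"; break; }
      sleep 3
    done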
	I1210 07:11:00.609346  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:00.620922  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:00.620998  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:00.647744  303437 cri.go:89] found id: ""
	I1210 07:11:00.647766  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.647775  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:00.647781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:00.647838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:00.685141  303437 cri.go:89] found id: ""
	I1210 07:11:00.685162  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.685171  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:00.685177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:00.685237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:00.713949  303437 cri.go:89] found id: ""
	I1210 07:11:00.713971  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.713980  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:00.713986  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:00.714045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:00.740428  303437 cri.go:89] found id: ""
	I1210 07:11:00.740453  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.740463  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:00.740471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:00.740531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:00.765430  303437 cri.go:89] found id: ""
	I1210 07:11:00.765455  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.765464  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:00.765471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:00.765529  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:00.790771  303437 cri.go:89] found id: ""
	I1210 07:11:00.790797  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.790806  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:00.790813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:00.790871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:00.817430  303437 cri.go:89] found id: ""
	I1210 07:11:00.817456  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.817465  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:00.817471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:00.817531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:00.841761  303437 cri.go:89] found id: ""
	I1210 07:11:00.841785  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.841794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:00.841803  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:00.841817  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:00.855324  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:00.855351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:00.926358  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:00.918056   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.918855   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.920644   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.921186   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.922891   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:00.918056   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.918855   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.920644   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.921186   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.922891   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:00.926380  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:00.926394  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:00.951644  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:00.951678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:00.979845  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:00.979875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
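Three of the collectors read host state directly rather than going through the CRI: the journalctl calls take the last 400 entries for the kubelet and containerd units, and the dmesg call filters the kernel ring buffer down to warnings and worse. Annotated with the flag meanings per util-linux dmesg — the commands are quoted from the log, only the comments are added:

    sudo journalctl -u kubelet -n 400      # last 400 kubelet journal entries
    sudo journalctl -u containerd -n 400   # last 400 containerd journal entries
    # -P no pager, -H human-readable output, -L=never disables color,
    # --level keeps only warn and more severe records:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400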
	I1210 07:11:03.540927  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:03.551392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:03.551462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:03.576792  303437 cri.go:89] found id: ""
	I1210 07:11:03.576821  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.576830  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:03.576837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:03.576896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:03.601193  303437 cri.go:89] found id: ""
	I1210 07:11:03.601216  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.601225  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:03.601233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:03.601290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:03.626528  303437 cri.go:89] found id: ""
	I1210 07:11:03.626550  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.626559  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:03.626565  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:03.626624  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:03.656106  303437 cri.go:89] found id: ""
	I1210 07:11:03.656128  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.656137  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:03.656149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:03.656206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:03.691936  303437 cri.go:89] found id: ""
	I1210 07:11:03.691960  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.691970  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:03.691976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:03.692037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:03.721295  303437 cri.go:89] found id: ""
	I1210 07:11:03.721321  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.721331  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:03.721338  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:03.721409  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:03.750080  303437 cri.go:89] found id: ""
	I1210 07:11:03.750105  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.750114  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:03.750121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:03.750205  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:03.777748  303437 cri.go:89] found id: ""
	I1210 07:11:03.777771  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.777780  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:03.777815  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:03.777836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:03.792128  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:03.792159  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:03.859337  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:03.851981   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.852547   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854609   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.856112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:03.851981   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.852547   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854609   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.856112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:03.859358  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:03.859371  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:03.885445  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:03.885482  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:03.915897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:03.915925  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:06.473632  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:06.484351  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:06.484431  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:06.509957  303437 cri.go:89] found id: ""
	I1210 07:11:06.509982  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.509991  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:06.509997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:06.510061  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:06.537150  303437 cri.go:89] found id: ""
	I1210 07:11:06.537175  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.537185  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:06.537195  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:06.537255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:06.571765  303437 cri.go:89] found id: ""
	I1210 07:11:06.571789  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.571798  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:06.571804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:06.571872  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:06.600905  303437 cri.go:89] found id: ""
	I1210 07:11:06.600928  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.600938  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:06.600944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:06.601007  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:06.625296  303437 cri.go:89] found id: ""
	I1210 07:11:06.625320  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.625329  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:06.625335  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:06.625396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:06.653467  303437 cri.go:89] found id: ""
	I1210 07:11:06.653490  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.653499  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:06.653505  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:06.653563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:06.693284  303437 cri.go:89] found id: ""
	I1210 07:11:06.693309  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.693319  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:06.693325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:06.693385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:06.731038  303437 cri.go:89] found id: ""
	I1210 07:11:06.731061  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.731069  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:06.731079  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:06.731091  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:06.744632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:06.744661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:06.805649  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:06.797284   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.798135   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.799790   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.800106   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.801600   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:06.797284   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.798135   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.799790   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.800106   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.801600   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:06.805675  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:06.805697  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:06.830881  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:06.830917  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:06.859403  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:06.859429  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:09.415956  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:09.428117  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:09.428237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:09.457364  303437 cri.go:89] found id: ""
	I1210 07:11:09.457426  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.457457  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:09.457478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:09.457570  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:09.487281  303437 cri.go:89] found id: ""
	I1210 07:11:09.487343  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.487375  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:09.487395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:09.487481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:09.512841  303437 cri.go:89] found id: ""
	I1210 07:11:09.512912  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.512945  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:09.512964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:09.513056  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:09.538740  303437 cri.go:89] found id: ""
	I1210 07:11:09.538824  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.538855  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:09.538885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:09.538979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:09.566651  303437 cri.go:89] found id: ""
	I1210 07:11:09.566692  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.566718  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:09.566732  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:09.566811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:09.591707  303437 cri.go:89] found id: ""
	I1210 07:11:09.591782  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.591798  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:09.591808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:09.591866  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:09.620542  303437 cri.go:89] found id: ""
	I1210 07:11:09.620568  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.620577  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:09.620584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:09.620642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:09.649059  303437 cri.go:89] found id: ""
	I1210 07:11:09.649082  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.649091  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:09.649100  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:09.649111  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:09.674480  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:09.674512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:09.715383  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:09.715410  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:09.775480  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:09.775512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:09.788719  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:09.788798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:09.855981  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:09.848870   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.849294   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.850550   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.851138   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.852720   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:09.848870   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.849294   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.850550   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.851138   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.852720   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:12.356259  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:12.366697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:12.366763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:12.390732  303437 cri.go:89] found id: ""
	I1210 07:11:12.390756  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.390764  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:12.390771  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:12.390826  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:12.430569  303437 cri.go:89] found id: ""
	I1210 07:11:12.430619  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.430631  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:12.430638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:12.430704  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:12.477376  303437 cri.go:89] found id: ""
	I1210 07:11:12.477398  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.477406  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:12.477412  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:12.477483  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:12.503110  303437 cri.go:89] found id: ""
	I1210 07:11:12.503132  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.503140  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:12.503147  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:12.503206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:12.527661  303437 cri.go:89] found id: ""
	I1210 07:11:12.527683  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.527691  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:12.527698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:12.527757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:12.552603  303437 cri.go:89] found id: ""
	I1210 07:11:12.552624  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.552632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:12.552639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:12.552701  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:12.576969  303437 cri.go:89] found id: ""
	I1210 07:11:12.576991  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.576999  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:12.577005  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:12.577074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:12.602537  303437 cri.go:89] found id: ""
	I1210 07:11:12.602559  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.602568  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:12.602577  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:12.602589  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:12.660382  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:12.660462  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:12.675575  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:12.675600  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:12.748937  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:12.741330   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.741988   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.743656   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.744158   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.745748   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:12.748957  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:12.748970  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:12.773717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:12.773752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
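	Each polling round above walks the same list of control-plane components through crictl. A sketch of the equivalent loop, using only the component names and the crictl invocation shown in the log:

	    # Check every expected component for a (possibly exited) container.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container matching \"$name\""
	    done

	All eight lookups come back empty in every round here (found id: ""), i.e. containerd never created any control-plane containers.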
	I1210 07:11:15.305384  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:15.315713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:15.315783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:15.340655  303437 cri.go:89] found id: ""
	I1210 07:11:15.340678  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.340687  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:15.340693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:15.340757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:15.366091  303437 cri.go:89] found id: ""
	I1210 07:11:15.366115  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.366123  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:15.366130  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:15.366187  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:15.392837  303437 cri.go:89] found id: ""
	I1210 07:11:15.392862  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.392871  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:15.392877  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:15.392939  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:15.435313  303437 cri.go:89] found id: ""
	I1210 07:11:15.435340  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.435349  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:15.435356  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:15.435422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:15.466475  303437 cri.go:89] found id: ""
	I1210 07:11:15.466500  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.466509  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:15.466516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:15.466575  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:15.497149  303437 cri.go:89] found id: ""
	I1210 07:11:15.497175  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.497184  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:15.497191  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:15.497250  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:15.523660  303437 cri.go:89] found id: ""
	I1210 07:11:15.523725  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.523741  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:15.523748  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:15.523808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:15.547943  303437 cri.go:89] found id: ""
	I1210 07:11:15.547971  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.547987  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:15.547996  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:15.548007  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:15.603029  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:15.603064  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:15.616115  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:15.616150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:15.696616  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:15.686858   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.687579   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689227   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689725   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.693083   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:15.696637  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:15.696660  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:15.728162  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:15.728212  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:18.262884  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:18.273396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:18.273467  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:18.298776  303437 cri.go:89] found id: ""
	I1210 07:11:18.298799  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.298809  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:18.298816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:18.298873  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:18.326358  303437 cri.go:89] found id: ""
	I1210 07:11:18.326431  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.326444  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:18.326472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:18.326567  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:18.351094  303437 cri.go:89] found id: ""
	I1210 07:11:18.351116  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.351125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:18.351132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:18.351190  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:18.376189  303437 cri.go:89] found id: ""
	I1210 07:11:18.376211  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.376220  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:18.376227  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:18.376283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:18.400127  303437 cri.go:89] found id: ""
	I1210 07:11:18.400151  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.400160  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:18.400166  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:18.400231  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:18.429089  303437 cri.go:89] found id: ""
	I1210 07:11:18.429160  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.429173  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:18.429181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:18.429304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:18.462081  303437 cri.go:89] found id: ""
	I1210 07:11:18.462162  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.462174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:18.462202  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:18.462289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:18.490007  303437 cri.go:89] found id: ""
	I1210 07:11:18.490081  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.490105  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:18.490128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:18.490164  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:18.506325  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:18.506400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:18.582081  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:18.572894   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.573949   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.574774   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.576605   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.577188   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:18.582154  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:18.582194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:18.608014  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:18.608047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:18.637797  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:18.637826  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
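	The log bundle minikube assembles in each round can be collected by hand with the same commands (taken verbatim from the Run: lines above; the last line keeps minikube's crictl-to-docker fallback):

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a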
	I1210 07:11:21.198374  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:21.208690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:21.208757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:21.235678  303437 cri.go:89] found id: ""
	I1210 07:11:21.235701  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.235710  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:21.235723  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:21.235788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:21.259648  303437 cri.go:89] found id: ""
	I1210 07:11:21.259671  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.259679  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:21.259685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:21.259742  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:21.284541  303437 cri.go:89] found id: ""
	I1210 07:11:21.284562  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.284571  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:21.284577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:21.284634  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:21.309347  303437 cri.go:89] found id: ""
	I1210 07:11:21.309371  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.309380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:21.309386  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:21.309449  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:21.337308  303437 cri.go:89] found id: ""
	I1210 07:11:21.337377  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.337397  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:21.337414  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:21.337498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:21.362600  303437 cri.go:89] found id: ""
	I1210 07:11:21.362622  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.362631  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:21.362637  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:21.362706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:21.386909  303437 cri.go:89] found id: ""
	I1210 07:11:21.386934  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.386951  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:21.386959  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:21.387045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:21.444294  303437 cri.go:89] found id: ""
	I1210 07:11:21.444331  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.444340  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:21.444350  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:21.444361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:21.537630  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:21.526461   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.527437   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.531792   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.532470   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.534191   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:21.537650  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:21.537744  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:21.567303  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:21.567339  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:21.599305  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:21.599333  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:21.660956  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:21.660989  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:24.197663  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:24.209532  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:24.209604  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:24.235185  303437 cri.go:89] found id: ""
	I1210 07:11:24.235207  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.235215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:24.235222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:24.235291  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:24.269486  303437 cri.go:89] found id: ""
	I1210 07:11:24.269507  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.269515  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:24.269522  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:24.269580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:24.295987  303437 cri.go:89] found id: ""
	I1210 07:11:24.296010  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.296018  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:24.296024  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:24.296080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:24.321843  303437 cri.go:89] found id: ""
	I1210 07:11:24.321918  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.321932  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:24.321939  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:24.322070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:24.349226  303437 cri.go:89] found id: ""
	I1210 07:11:24.349296  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.349309  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:24.349316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:24.349439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:24.382513  303437 cri.go:89] found id: ""
	I1210 07:11:24.382595  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.382617  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:24.382636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:24.382759  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:24.423211  303437 cri.go:89] found id: ""
	I1210 07:11:24.423284  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.423306  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:24.423325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:24.423413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:24.483751  303437 cri.go:89] found id: ""
	I1210 07:11:24.483774  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.483783  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:24.483792  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:24.483831  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:24.554712  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:24.547245   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.548134   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.549755   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.550089   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.551593   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:24.554746  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:24.554759  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:24.583135  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:24.583172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:24.621794  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:24.621824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:24.686891  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:24.686927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:27.212817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:27.223470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:27.223540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:27.250394  303437 cri.go:89] found id: ""
	I1210 07:11:27.250421  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.250431  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:27.250437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:27.250497  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:27.275076  303437 cri.go:89] found id: ""
	I1210 07:11:27.275099  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.275108  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:27.275114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:27.275175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:27.300285  303437 cri.go:89] found id: ""
	I1210 07:11:27.300311  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.300321  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:27.300327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:27.300389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:27.324870  303437 cri.go:89] found id: ""
	I1210 07:11:27.324894  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.324904  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:27.324910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:27.324976  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:27.351041  303437 cri.go:89] found id: ""
	I1210 07:11:27.351063  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.351072  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:27.351079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:27.351145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:27.375920  303437 cri.go:89] found id: ""
	I1210 07:11:27.375942  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.375950  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:27.375957  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:27.376016  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:27.400149  303437 cri.go:89] found id: ""
	I1210 07:11:27.400174  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.400183  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:27.400190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:27.400248  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:27.436160  303437 cri.go:89] found id: ""
	I1210 07:11:27.436192  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.436201  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:27.436211  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:27.436222  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:27.498671  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:27.498704  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:27.512854  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:27.512880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:27.582038  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:27.573895   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.574782   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576306   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576889   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.578645   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:27.582102  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:27.582129  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:27.610246  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:27.610287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:30.139493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:30.150290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:30.150358  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:30.176970  303437 cri.go:89] found id: ""
	I1210 07:11:30.177000  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.177008  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:30.177015  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:30.177079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:30.202200  303437 cri.go:89] found id: ""
	I1210 07:11:30.202226  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.202235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:30.202241  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:30.202300  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:30.226724  303437 cri.go:89] found id: ""
	I1210 07:11:30.226748  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.226757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:30.226763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:30.226825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:30.251813  303437 cri.go:89] found id: ""
	I1210 07:11:30.251835  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.251844  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:30.251850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:30.251912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:30.277078  303437 cri.go:89] found id: ""
	I1210 07:11:30.277099  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.277109  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:30.277115  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:30.277172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:30.305998  303437 cri.go:89] found id: ""
	I1210 07:11:30.306019  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.306027  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:30.306034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:30.306091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:30.334810  303437 cri.go:89] found id: ""
	I1210 07:11:30.334831  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.334839  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:30.334846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:30.334903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:30.359892  303437 cri.go:89] found id: ""
	I1210 07:11:30.359913  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.359921  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:30.359930  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:30.359940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:30.385054  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:30.385088  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:30.421360  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:30.421390  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:30.485019  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:30.485051  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:30.498844  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:30.498916  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:30.560538  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:30.552697   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.553538   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.554465   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.555942   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.556285   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
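	Every round begins with the same process-level check before falling back to the CRI queries. To confirm by hand that no apiserver process exists, and to see what the kubelet (which is responsible for starting the control-plane static pods) is doing, a sketch assuming systemd inside the node (the journalctl -u kubelet calls above imply a systemd unit):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    systemctl status kubelet --no-pager | head -n 20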
	I1210 07:11:33.062385  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:33.073083  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:33.073165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:33.097439  303437 cri.go:89] found id: ""
	I1210 07:11:33.097463  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.097471  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:33.097478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:33.097540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:33.124732  303437 cri.go:89] found id: ""
	I1210 07:11:33.124754  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.124763  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:33.124769  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:33.124829  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:33.153513  303437 cri.go:89] found id: ""
	I1210 07:11:33.153536  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.153545  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:33.153550  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:33.153610  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:33.179491  303437 cri.go:89] found id: ""
	I1210 07:11:33.179518  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.179526  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:33.179533  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:33.179593  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:33.205039  303437 cri.go:89] found id: ""
	I1210 07:11:33.205232  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.205248  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:33.205255  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:33.205332  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:33.231637  303437 cri.go:89] found id: ""
	I1210 07:11:33.231661  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.231670  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:33.231677  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:33.231740  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:33.257596  303437 cri.go:89] found id: ""
	I1210 07:11:33.257622  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.257630  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:33.257636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:33.257702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:33.283943  303437 cri.go:89] found id: ""
	I1210 07:11:33.283968  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.283978  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:33.283989  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:33.284003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:33.297130  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:33.297162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:33.358971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:33.351484   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.351972   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.353499   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.354087   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.355603   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:33.359004  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:33.359053  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:33.383559  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:33.383593  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:33.411160  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:33.411184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
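
Every retry cycle in this log begins with the same crictl sweep (cri.go:54/89): each control-plane component is queried with `crictl ps -a --quiet --name=<component>`, and an empty result is exactly what surfaces as the `found id: ""` / `0 containers: []` pairs above. A minimal Go sketch of that listing step — assuming crictl is on the node's PATH; the listContainers helper name is hypothetical, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the "crictl ps -a --quiet --name=<name>" calls in
    // the log: it returns the IDs of all containers (any state) whose name
    // matches, one ID per line of crictl output. Empty output is what produces
    // the `found id: ""` / `0 containers: []` lines.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
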
	I1210 07:11:35.975172  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:35.985598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:35.985677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:36.012649  303437 cri.go:89] found id: ""
	I1210 07:11:36.012687  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.012698  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:36.012705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:36.012772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:36.039233  303437 cri.go:89] found id: ""
	I1210 07:11:36.039301  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.039325  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:36.039344  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:36.039440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:36.064743  303437 cri.go:89] found id: ""
	I1210 07:11:36.064766  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.064775  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:36.064781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:36.064839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:36.088939  303437 cri.go:89] found id: ""
	I1210 07:11:36.088961  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.088969  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:36.088975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:36.089037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:36.116797  303437 cri.go:89] found id: ""
	I1210 07:11:36.116821  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.116830  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:36.116836  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:36.116894  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:36.141419  303437 cri.go:89] found id: ""
	I1210 07:11:36.141447  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.141456  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:36.141463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:36.141525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:36.166138  303437 cri.go:89] found id: ""
	I1210 07:11:36.166165  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.166174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:36.166180  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:36.166242  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:36.193939  303437 cri.go:89] found id: ""
	I1210 07:11:36.194014  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.194036  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:36.194058  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:36.194096  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:36.250476  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:36.250507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:36.263989  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:36.264070  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:36.328452  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:36.320900   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.321474   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323175   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323732   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.325316   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:36.328474  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:36.328487  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:36.353490  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:36.353523  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
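
Between sweeps the runner pauses roughly three seconds (07:11:33 → 07:11:36 → 07:11:39 …) and re-checks for an apiserver process with `pgrep -xnf kube-apiserver.*minikube.*`. A hedged reconstruction of that poll loop — the function name and timeout below are assumptions for illustration, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer sketches the retry visible in the log: every ~3s run
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    // pgrep exits non-zero when nothing matches, so err != nil from Run()
    // means "no apiserver process yet" and the loop keeps polling.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // a kube-apiserver process exists
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
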
	I1210 07:11:38.890866  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:38.901365  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:38.901464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:38.932423  303437 cri.go:89] found id: ""
	I1210 07:11:38.932450  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.932458  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:38.932465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:38.932525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:38.959879  303437 cri.go:89] found id: ""
	I1210 07:11:38.959907  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.959915  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:38.959921  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:38.959978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:38.986312  303437 cri.go:89] found id: ""
	I1210 07:11:38.986338  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.986347  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:38.986353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:38.986410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:39.011808  303437 cri.go:89] found id: ""
	I1210 07:11:39.011830  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.011839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:39.011845  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:39.011908  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:39.037634  303437 cri.go:89] found id: ""
	I1210 07:11:39.037675  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.037685  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:39.037691  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:39.037763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:39.062989  303437 cri.go:89] found id: ""
	I1210 07:11:39.063073  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.063096  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:39.063114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:39.063200  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:39.092710  303437 cri.go:89] found id: ""
	I1210 07:11:39.092732  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.092740  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:39.092749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:39.092809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:39.116692  303437 cri.go:89] found id: ""
	I1210 07:11:39.116715  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.116724  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:39.116735  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:39.116745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:39.173134  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:39.173165  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:39.187543  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:39.187619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:39.248942  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:39.241567   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.242335   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.243953   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.244270   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.245769   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:39.248964  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:39.248976  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:39.273536  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:39.273572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
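
The recurring `dial tcp [::1]:8443: connect: connection refused` stderr means nothing is listening on the apiserver port at all — the container was never created, so the TCP connection is refused before any TLS or HTTP exchange happens. A small hypothetical probe that reproduces the symptom from inside the node:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// the apiserver serves a self-signed cert inside the node,
    			// so skip verification for this diagnostic probe only
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://localhost:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // e.g. connection refused
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver responded:", resp.Status)
    }
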
	I1210 07:11:41.801091  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:41.812394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:41.812473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:41.838936  303437 cri.go:89] found id: ""
	I1210 07:11:41.839028  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.839042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:41.839050  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:41.839131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:41.864566  303437 cri.go:89] found id: ""
	I1210 07:11:41.864593  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.864603  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:41.864609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:41.864673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:41.889296  303437 cri.go:89] found id: ""
	I1210 07:11:41.889321  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.889330  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:41.889337  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:41.889396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:41.915562  303437 cri.go:89] found id: ""
	I1210 07:11:41.915589  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.915601  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:41.915608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:41.915670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:41.953369  303437 cri.go:89] found id: ""
	I1210 07:11:41.953395  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.953404  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:41.953410  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:41.953473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:41.985179  303437 cri.go:89] found id: ""
	I1210 07:11:41.985205  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.985216  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:41.985223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:41.985327  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:42.015327  303437 cri.go:89] found id: ""
	I1210 07:11:42.015400  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.015424  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:42.015443  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:42.015541  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:42.043382  303437 cri.go:89] found id: ""
	I1210 07:11:42.043407  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.043421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:42.043431  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:42.043443  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:42.080163  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:42.080196  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:42.139896  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:42.139935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:42.156701  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:42.156737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:42.234579  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:42.225807   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.226427   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228068   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228448   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.229958   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:42.234662  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:42.234691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:44.763362  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:44.773978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:44.774048  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:44.799637  303437 cri.go:89] found id: ""
	I1210 07:11:44.799665  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.799674  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:44.799680  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:44.799741  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:44.827772  303437 cri.go:89] found id: ""
	I1210 07:11:44.827797  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.827806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:44.827812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:44.827871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:44.851977  303437 cri.go:89] found id: ""
	I1210 07:11:44.852005  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.852014  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:44.852020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:44.852080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:44.876554  303437 cri.go:89] found id: ""
	I1210 07:11:44.876580  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.876590  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:44.876596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:44.876658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:44.903100  303437 cri.go:89] found id: ""
	I1210 07:11:44.903132  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.903141  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:44.903154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:44.903215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:44.933312  303437 cri.go:89] found id: ""
	I1210 07:11:44.933333  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.933342  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:44.933348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:44.933407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:44.969458  303437 cri.go:89] found id: ""
	I1210 07:11:44.969530  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.969552  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:44.969569  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:44.969666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:45.013288  303437 cri.go:89] found id: ""
	I1210 07:11:45.013381  303437 logs.go:282] 0 containers: []
	W1210 07:11:45.013403  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:45.013427  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:45.013468  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:45.111594  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:45.112597  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:45.131602  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:45.131636  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:45.220807  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:45.205512   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.206557   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.208854   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.209321   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.215241   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:45.220830  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:45.220843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:45.257708  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:45.257752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
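
Each "Gathering logs for ..." line is a shell one-liner executed through `/bin/bash -c` by ssh_runner.go over SSH into the node. A sketch bundling the exact commands from the log — the map and function are illustrative, not minikube's own structure:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs runs the same one-liners the log shows; commands are copied
    // verbatim from the Run: lines above.
    func gatherLogs() map[string]string {
    	sources := map[string]string{
    		"kubelet":    "sudo journalctl -u kubelet -n 400",
    		"containerd": "sudo journalctl -u containerd -n 400",
    		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"containers": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	logs := make(map[string]string)
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			out = append(out, []byte(fmt.Sprintf("\n(command failed: %v)", err))...)
    		}
    		logs[name] = string(out)
    	}
    	return logs
    }

    func main() {
    	for name, text := range gatherLogs() {
    		fmt.Printf("== %s (%d bytes)\n", name, len(text))
    	}
    }
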
	I1210 07:11:47.792395  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:47.802865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:47.802937  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:47.832152  303437 cri.go:89] found id: ""
	I1210 07:11:47.832175  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.832191  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:47.832198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:47.832262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:47.856843  303437 cri.go:89] found id: ""
	I1210 07:11:47.856868  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.856877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:47.856883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:47.856943  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:47.880564  303437 cri.go:89] found id: ""
	I1210 07:11:47.880586  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.880595  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:47.880601  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:47.880658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:47.908243  303437 cri.go:89] found id: ""
	I1210 07:11:47.908264  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.908273  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:47.908280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:47.908337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:47.951940  303437 cri.go:89] found id: ""
	I1210 07:11:47.951961  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.951969  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:47.951975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:47.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:47.986418  303437 cri.go:89] found id: ""
	I1210 07:11:47.986437  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.986446  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:47.986452  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:47.986511  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:48.018032  303437 cri.go:89] found id: ""
	I1210 07:11:48.018055  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.018064  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:48.018069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:48.018131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:48.045010  303437 cri.go:89] found id: ""
	I1210 07:11:48.045033  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.045043  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:48.045052  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:48.045063  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:48.070773  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:48.070806  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:48.100419  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:48.100451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:48.157253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:48.157287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:48.171891  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:48.171922  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:48.236843  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:48.228588   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.229356   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231115   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231743   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.233451   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:50.738489  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:50.749165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:50.749232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:50.774993  303437 cri.go:89] found id: ""
	I1210 07:11:50.775032  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.775042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:50.775049  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:50.775108  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:50.800355  303437 cri.go:89] found id: ""
	I1210 07:11:50.800380  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.800389  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:50.800396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:50.800455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:50.825116  303437 cri.go:89] found id: ""
	I1210 07:11:50.825139  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.825148  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:50.825154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:50.825216  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:50.852419  303437 cri.go:89] found id: ""
	I1210 07:11:50.852441  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.852449  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:50.852455  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:50.852513  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:50.877502  303437 cri.go:89] found id: ""
	I1210 07:11:50.877522  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.877531  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:50.877537  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:50.877594  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:50.905139  303437 cri.go:89] found id: ""
	I1210 07:11:50.905161  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.905171  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:50.905177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:50.905237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:50.933267  303437 cri.go:89] found id: ""
	I1210 07:11:50.933291  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.933299  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:50.933305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:50.933364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:50.961246  303437 cri.go:89] found id: ""
	I1210 07:11:50.961267  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.961276  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:50.961285  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:50.961296  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:50.989123  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:50.989149  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:51.046128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:51.046168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:51.060977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:51.061014  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:51.126917  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:51.119326   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.119907   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.121515   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.122000   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.123466   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:51.126938  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:51.126951  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
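
The describe-nodes failure itself is kubectl exiting with status 1 because it cannot reach the server; logs.go:130 records that non-zero exit together with the captured stderr. A standalone reproduction using the same binary and kubeconfig paths shown above (paths taken from the log; adjust for a different cluster):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	// kubectl exits 1 when the apiserver is unreachable; surface that the
    	// same way the harness does ("Process exited with status 1")
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		fmt.Printf("kubectl exited with status %d\n", exitErr.ExitCode())
    	}
    }
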
	I1210 07:11:53.652260  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:53.662761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:53.662827  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:53.692655  303437 cri.go:89] found id: ""
	I1210 07:11:53.692728  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.692755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:53.692773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:53.692852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:53.726710  303437 cri.go:89] found id: ""
	I1210 07:11:53.726743  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.726752  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:53.726758  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:53.726816  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:53.751772  303437 cri.go:89] found id: ""
	I1210 07:11:53.751793  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.751802  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:53.751808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:53.751867  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:53.776281  303437 cri.go:89] found id: ""
	I1210 07:11:53.776347  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.776371  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:53.776391  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:53.776475  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:53.801234  303437 cri.go:89] found id: ""
	I1210 07:11:53.801259  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.801268  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:53.801275  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:53.801330  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:53.830240  303437 cri.go:89] found id: ""
	I1210 07:11:53.830265  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.830273  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:53.830280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:53.830341  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:53.855035  303437 cri.go:89] found id: ""
	I1210 07:11:53.855059  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.855069  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:53.855075  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:53.855140  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:53.883359  303437 cri.go:89] found id: ""
	I1210 07:11:53.883384  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.883401  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:53.883411  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:53.883423  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:53.923136  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:53.923215  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:53.985138  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:53.985172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:53.999740  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:53.999775  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:54.066156  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:54.058409   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.059038   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.060575   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.061111   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.062782   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:54.066181  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:54.066194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:56.591475  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:56.601960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:56.602033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:56.626286  303437 cri.go:89] found id: ""
	I1210 07:11:56.626311  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.626320  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:56.626327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:56.626385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:56.650098  303437 cri.go:89] found id: ""
	I1210 07:11:56.650124  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.650133  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:56.650139  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:56.650201  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:56.677542  303437 cri.go:89] found id: ""
	I1210 07:11:56.677569  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.677578  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:56.677584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:56.677659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:56.709405  303437 cri.go:89] found id: ""
	I1210 07:11:56.709430  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.709439  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:56.709446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:56.709508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:56.739179  303437 cri.go:89] found id: ""
	I1210 07:11:56.739204  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.739212  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:56.739219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:56.739277  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:56.766584  303437 cri.go:89] found id: ""
	I1210 07:11:56.766609  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.766618  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:56.766624  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:56.766691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:56.791703  303437 cri.go:89] found id: ""
	I1210 07:11:56.791729  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.791739  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:56.791745  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:56.791809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:56.817298  303437 cri.go:89] found id: ""
	I1210 07:11:56.817325  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.817334  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:56.817344  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:56.817355  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:56.875173  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:56.875210  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:56.889120  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:56.889146  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:56.984238  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:56.984258  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:56.984270  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:57.011593  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:57.011627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:59.548660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:59.559203  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:59.559272  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:59.584024  303437 cri.go:89] found id: ""
	I1210 07:11:59.584091  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.584113  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:59.584131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:59.584223  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:59.609283  303437 cri.go:89] found id: ""
	I1210 07:11:59.609307  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.609316  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:59.609325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:59.609385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:59.633912  303437 cri.go:89] found id: ""
	I1210 07:11:59.633935  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.633944  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:59.633951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:59.634012  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:59.660339  303437 cri.go:89] found id: ""
	I1210 07:11:59.660365  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.660373  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:59.660380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:59.660437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:59.697302  303437 cri.go:89] found id: ""
	I1210 07:11:59.697329  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.697342  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:59.697348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:59.697410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:59.733379  303437 cri.go:89] found id: ""
	I1210 07:11:59.733402  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.733411  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:59.733418  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:59.733488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:59.758324  303437 cri.go:89] found id: ""
	I1210 07:11:59.758350  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.758360  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:59.758366  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:59.758423  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:59.788265  303437 cri.go:89] found id: ""
	I1210 07:11:59.788304  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.788313  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:59.788323  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:59.788335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:59.816310  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:59.816335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:59.875191  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:59.875227  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:59.888706  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:59.888737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:59.964581  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:59.964604  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:59.964617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:02.490529  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:02.501579  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:02.501655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:02.530852  303437 cri.go:89] found id: ""
	I1210 07:12:02.530876  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.530885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:02.530894  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:02.530955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:02.561336  303437 cri.go:89] found id: ""
	I1210 07:12:02.561361  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.561370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:02.561377  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:02.561434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:02.585933  303437 cri.go:89] found id: ""
	I1210 07:12:02.585963  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.585972  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:02.585979  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:02.586040  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:02.611097  303437 cri.go:89] found id: ""
	I1210 07:12:02.611122  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.611131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:02.611137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:02.611199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:02.637900  303437 cri.go:89] found id: ""
	I1210 07:12:02.637925  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.637934  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:02.637941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:02.638002  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:02.669431  303437 cri.go:89] found id: ""
	I1210 07:12:02.669457  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.669467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:02.669474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:02.669536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:02.704940  303437 cri.go:89] found id: ""
	I1210 07:12:02.704967  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.704976  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:02.704983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:02.705044  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:02.733218  303437 cri.go:89] found id: ""
	I1210 07:12:02.733241  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.733251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:02.733260  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:02.733271  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:02.791544  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:02.791580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:02.805689  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:02.805716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:02.873516  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:02.873536  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:02.873548  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:02.898899  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:02.898932  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.445135  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:05.455827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:05.455898  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:05.481329  303437 cri.go:89] found id: ""
	I1210 07:12:05.481352  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.481363  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:05.481370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:05.481428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:05.507339  303437 cri.go:89] found id: ""
	I1210 07:12:05.507362  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.507371  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:05.507378  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:05.507444  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:05.531971  303437 cri.go:89] found id: ""
	I1210 07:12:05.531995  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.532004  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:05.532010  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:05.532074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:05.563046  303437 cri.go:89] found id: ""
	I1210 07:12:05.563069  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.563078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:05.563084  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:05.563147  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:05.587778  303437 cri.go:89] found id: ""
	I1210 07:12:05.587801  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.587810  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:05.587816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:05.587874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:05.611952  303437 cri.go:89] found id: ""
	I1210 07:12:05.611973  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.611982  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:05.611988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:05.612047  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:05.636683  303437 cri.go:89] found id: ""
	I1210 07:12:05.636705  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.636715  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:05.636721  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:05.636781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:05.674580  303437 cri.go:89] found id: ""
	I1210 07:12:05.674609  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.674619  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:05.674628  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:05.674640  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:05.690150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:05.690176  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:05.761058  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:05.761078  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:05.761090  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:05.786479  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:05.786515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.814400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:05.814426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.372748  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:08.382940  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:08.383032  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:08.406822  303437 cri.go:89] found id: ""
	I1210 07:12:08.406851  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.406860  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:08.406867  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:08.406931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:08.431746  303437 cri.go:89] found id: ""
	I1210 07:12:08.431775  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.431786  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:08.431795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:08.431857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:08.456129  303437 cri.go:89] found id: ""
	I1210 07:12:08.456152  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.456161  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:08.456167  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:08.456226  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:08.481945  303437 cri.go:89] found id: ""
	I1210 07:12:08.481981  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.481990  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:08.481997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:08.482070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:08.511057  303437 cri.go:89] found id: ""
	I1210 07:12:08.511080  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.511089  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:08.511095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:08.511165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:08.537072  303437 cri.go:89] found id: ""
	I1210 07:12:08.537094  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.537106  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:08.537113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:08.537188  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:08.562930  303437 cri.go:89] found id: ""
	I1210 07:12:08.562961  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.562970  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:08.562992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:08.563116  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:08.587421  303437 cri.go:89] found id: ""
	I1210 07:12:08.587446  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.587455  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:08.587464  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:08.587501  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.646970  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:08.647003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:08.661398  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:08.661426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:08.746222  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:08.746254  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:08.746267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:08.772476  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:08.772510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:11.303459  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:11.315726  303437 out.go:203] 
	W1210 07:12:11.316890  303437 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:12:11.316924  303437 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:12:11.316933  303437 out.go:285] * Related issues:
	W1210 07:12:11.316946  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:12:11.316957  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:12:11.318146  303437 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229542174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229558412Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229590757Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229604525Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229613715Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229623348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229633022Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229642441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229657818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229687390Z" level=info msg="Connect containerd service"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229958744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.230529901Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250111138Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250206229Z" level=info msg="Start recovering state"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250507327Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.251405174Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273418724Z" level=info msg="Start event monitor"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273477383Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273488378Z" level=info msg="Start streaming server"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273499069Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273508768Z" level=info msg="runtime interface starting up..."
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273515496Z" level=info msg="starting plugins..."
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273546668Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273837124Z" level=info msg="containerd successfully booted in 0.065786s"
	Dec 10 07:06:07 newest-cni-168808 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:14.551293   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:14.551835   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:14.553795   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:14.554343   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:14.556000   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:12:14 up  1:54,  0 user,  load average: 0.33, 0.48, 1.06
	Linux newest-cni-168808 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:12:10 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:11 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 10 07:12:11 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:11 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:11 newest-cni-168808 kubelet[13348]: E1210 07:12:11.728622   13348 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:11 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:11 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:12 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 10 07:12:12 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:12 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:12 newest-cni-168808 kubelet[13354]: E1210 07:12:12.484489   13354 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:12 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:12 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:13 newest-cni-168808 kubelet[13374]: E1210 07:12:13.219858   13374 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:13 newest-cni-168808 kubelet[13380]: E1210 07:12:13.969471   13380 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:13 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (368.850954ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-168808" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (375.52s)
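The kubelet journal above is the actual root cause of this failure: every restart (counters 484 through 487) dies in configuration validation with "kubelet is configured to not run on a host using cgroup v1", so kubelet never launches the static pods, crictl finds no kube-apiserver/etcd/coredns containers, and minikube finally exits with K8S_APISERVER_MISSING after its 6m0s wait. The kernel line (5.15.0-1084-aws, #91~20.04.1-Ubuntu) suggests an Ubuntu 20.04 Jenkins host, which boots with the legacy cgroup v1 hierarchy by default; the kicbase node reports Debian 12, but cgroup v1 vs v2 is a property of the host kernel, not the node image. A minimal probe to confirm the cgroup mode, assuming shell access to the host (or to the node via minikube ssh -p newest-cni-168808); this is a standard check, not part of the test suite:

	# cgroup2fs => cgroup v2; tmpfs => legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup/

If this prints tmpfs, the v1.35.0-rc.1 jobs would need a cgroup v2 host (for example, booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line), since this kubelet build refuses by default to validate its configuration on cgroup v1, per the error above.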
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.06s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:06:44.570988    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated verbatim 103 more times while the helper polled the unreachable API server]
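The repeated WARNING comes from the test helper listing dashboard pods in a loop while the node is stopped, so every request fails with "connection refused" until the API server is reachable again. Below is a minimal sketch of such a poll using client-go; it is illustrative only, not the actual minikube helper, and the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for illustration; the real helper builds its
        // client from the minikube profile under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        for {
            // Issues the same request seen in the warning lines:
            // GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
            pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(
                context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"},
            )
            if err != nil {
                // While the API server is down this is
                // "dial tcp 192.168.85.2:8443: connect: connection refused".
                fmt.Printf("WARNING: pod list returned: %v\n", err)
                time.Sleep(3 * time.Second)
                continue
            }
            fmt.Printf("found %d dashboard pods\n", len(pods.Items))
            return
        }
    }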
E1210 07:08:37.012587    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated verbatim 3 more times]
E1210 07:08:40.888306    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated verbatim 38 more times]
E1210 07:09:19.663245    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated verbatim 36 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
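The warning collapsed above is emitted by a poll loop in helpers_test.go while the API server at 192.168.85.2:8443 is refusing connections. A minimal sketch of that kind of label-selector poll, assuming k8s.io/client-go (the kubeconfig path and loop shape are illustrative, not the exact helper):

// podlist_sketch.go -- illustrative only; assumes k8s.io/client-go.
// Lists pods in "kubernetes-dashboard" matching k8s-app=kubernetes-dashboard,
// retrying while the API server refuses connections.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the CI job uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// This is the condition the helper logs as a WARNING: the GET against
			// https://<apiserver>:8443/... fails with "connection refused".
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
		return
	}
}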
E1210 07:10:03.955302    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 94 more times]
E1210 07:11:38.876651    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 5 more times]
E1210 07:11:44.571424    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
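The three E1210 cert_rotation errors above (07:10:03, 07:11:38, 07:11:44) come from cached TLS transports that still reference client certificates of profiles deleted earlier in the run (old-k8s-version-806899, functional-644034, addons-173024). Each failure reduces to a missing file on disk; a sketch of the underlying check, assuming only the Go standard library (the path is copied from the first error and is otherwise illustrative):

// certstat_sketch.go -- illustrative only; reproduces the stat that fails
// inside cert_rotation.go's "Loading client cert failed" error.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the first cert_rotation error above.
	p := "/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt"
	if _, err := os.Stat(p); err != nil {
		// os.IsNotExist(err) corresponds to the "no such file or directory"
		// seen in the log; expected once the profile has been deleted.
		fmt.Printf("client cert missing (expected after profile deletion): %v\n", err)
		return
	}
	fmt.Println("client cert present")
}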
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1210 07:13:28.351301    4116 config.go:182] Loaded profile config "auto-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:13:37.013093    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:13:40.888620    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:14:19.663355    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: [warning above repeated 42 more times while polling the apiserver]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 2 (336.928682ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
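For reference, the wait that timed out can be replayed by hand. A minimal sketch in shell, assuming the profile's kubeconfig context carries the profile name (no-preload-320236) and that the container IP 192.168.85.2 is reachable from the host via the docker bridge:

	# Probe the apiserver endpoint the pod lister was dialing
	curl -k --connect-timeout 5 https://192.168.85.2:8443/healthz

	# An equivalent of the harness's pod poll, expressed as kubectl wait
	kubectl --context no-preload-320236 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s

While the apiserver is down, both commands fail with the same connection-refused error logged above.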
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-320236
helpers_test.go:244: (dbg) docker inspect no-preload-320236:

-- stdout --
	[
	    {
	        "Id": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	        "Created": "2025-12-10T06:50:11.529745127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296159,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:00:31.906944272Z",
	            "FinishedAt": "2025-12-10T07:00:30.524095791Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hosts",
	        "LogPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df-json.log",
	        "Name": "/no-preload-320236",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-320236:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-320236",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	                "LowerDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-320236",
	                "Source": "/var/lib/docker/volumes/no-preload-320236/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-320236",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-320236",
	                "name.minikube.sigs.k8s.io": "no-preload-320236",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be5eb1503ed127ef0c2d044ffb245c38ab2a7657e10a797a5912ae4059c29e3f",
	            "SandboxKey": "/var/run/docker/netns/be5eb1503ed1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-320236": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:26:8b:69:77:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ffc647756423b4e81a1338ec5e1d5f1765ed1034ce5ae186c5fbcc84bf8cb09",
	                    "EndpointID": "31d9f19780654066d5dbb87109e480cce007c3d0fa04a397a4cec6b92d85ea58",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-320236",
	                        "afeff1ab0e38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
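Most of the inspect dump above is default configuration; the fields this post-mortem actually turns on (container state, restart count, static IP) can be extracted directly with docker's Go templates. A sketch, not part of the harness:

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} started={{.State.StartedAt}}' no-preload-320236
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-320236

Here that yields a running container on 192.168.85.2 even though the apiserver inside it reports Stopped.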
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
E1210 07:15:42.734549    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 2 (385.252393ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
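The split verdict (host Running, apiserver Stopped) can be read in a single call, since minikube status accepts a Go template over the same fields the harness queries one at a time. A sketch using the binary under test:

	out/minikube-linux-arm64 status -p no-preload-320236 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'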
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-320236 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-225109 sudo systemctl status kubelet --all --full --no-pager                                                                        │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo systemctl cat kubelet --no-pager                                                                                        │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo journalctl -xeu kubelet --all --full --no-pager                                                                         │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo cat /etc/kubernetes/kubelet.conf                                                                                        │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo cat /var/lib/kubelet/config.yaml                                                                                        │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo systemctl status docker --all --full --no-pager                                                                         │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │                     │
	│ ssh     │ -p kindnet-225109 sudo systemctl cat docker --no-pager                                                                                         │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo cat /etc/docker/daemon.json                                                                                             │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │                     │
	│ ssh     │ -p kindnet-225109 sudo docker system info                                                                                                      │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │                     │
	│ ssh     │ -p kindnet-225109 sudo systemctl status cri-docker --all --full --no-pager                                                                     │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │                     │
	│ ssh     │ -p kindnet-225109 sudo systemctl cat cri-docker --no-pager                                                                                     │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │                     │
	│ ssh     │ -p kindnet-225109 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                          │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo cri-dockerd --version                                                                                                   │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo systemctl status containerd --all --full --no-pager                                                                     │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo systemctl cat containerd --no-pager                                                                                     │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo cat /lib/systemd/system/containerd.service                                                                              │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo cat /etc/containerd/config.toml                                                                                         │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo containerd config dump                                                                                                  │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo systemctl status crio --all --full --no-pager                                                                           │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │                     │
	│ ssh     │ -p kindnet-225109 sudo systemctl cat crio --no-pager                                                                                           │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                 │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ ssh     │ -p kindnet-225109 sudo crio config                                                                                                             │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ delete  │ -p kindnet-225109                                                                                                                              │ kindnet-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │ 10 Dec 25 07:15 UTC │
	│ start   │ -p flannel-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd │ flannel-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:15:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:15:37.495765  337159 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:15:37.495984  337159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:15:37.495998  337159 out.go:374] Setting ErrFile to fd 2...
	I1210 07:15:37.496004  337159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:15:37.496242  337159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:15:37.496653  337159 out.go:368] Setting JSON to false
	I1210 07:15:37.497478  337159 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7088,"bootTime":1765343850,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:15:37.497560  337159 start.go:143] virtualization:  
	I1210 07:15:37.501205  337159 out.go:179] * [flannel-225109] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:15:37.505523  337159 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:15:37.505699  337159 notify.go:221] Checking for updates...
	I1210 07:15:37.512133  337159 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:15:37.515289  337159 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:15:37.518279  337159 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:15:37.521349  337159 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:15:37.524405  337159 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:15:37.527976  337159 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:15:37.528092  337159 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:15:37.551922  337159 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:15:37.552060  337159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:15:37.608950  337159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:15:37.598417691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:15:37.609058  337159 docker.go:319] overlay module found
	I1210 07:15:37.614111  337159 out.go:179] * Using the docker driver based on user configuration
	I1210 07:15:37.617043  337159 start.go:309] selected driver: docker
	I1210 07:15:37.617063  337159 start.go:927] validating driver "docker" against <nil>
	I1210 07:15:37.617077  337159 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:15:37.617811  337159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:15:37.673506  337159 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:15:37.664218899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:15:37.673661  337159 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:15:37.673892  337159 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:15:37.677095  337159 out.go:179] * Using Docker driver with root privileges
	I1210 07:15:37.680010  337159 cni.go:84] Creating CNI manager for "flannel"
	I1210 07:15:37.680032  337159 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1210 07:15:37.680110  337159 start.go:353] cluster config:
	{Name:flannel-225109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-225109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:15:37.683355  337159 out.go:179] * Starting "flannel-225109" primary control-plane node in "flannel-225109" cluster
	I1210 07:15:37.686200  337159 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:15:37.689131  337159 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:15:37.692006  337159 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1210 07:15:37.692111  337159 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:15:37.712381  337159 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:15:37.712405  337159 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:15:37.756518  337159 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 07:15:37.938584  337159 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1210 07:15:37.938769  337159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/config.json ...
	I1210 07:15:37.938810  337159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/config.json: {Name:mka6e63c77c1ac41bdb8da23e9e6ac56690092b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:15:37.939003  337159 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:15:37.939138  337159 start.go:360] acquireMachinesLock for flannel-225109: {Name:mk98b67c059dcac1721357523ff596c28565f54f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:37.939055  337159 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:15:37.939204  337159 start.go:364] duration metric: took 45.876µs to acquireMachinesLock for "flannel-225109"
	I1210 07:15:37.939224  337159 start.go:93] Provisioning new machine with config: &{Name:flannel-225109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-225109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:15:37.939286  337159 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:15:37.942798  337159 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:15:37.943061  337159 start.go:159] libmachine.API.Create for "flannel-225109" (driver="docker")
	I1210 07:15:37.943097  337159 client.go:173] LocalClient.Create starting
	I1210 07:15:37.943152  337159 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 07:15:37.943183  337159 main.go:143] libmachine: Decoding PEM data...
	I1210 07:15:37.943200  337159 main.go:143] libmachine: Parsing certificate...
	I1210 07:15:37.943256  337159 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 07:15:37.943273  337159 main.go:143] libmachine: Decoding PEM data...
	I1210 07:15:37.943285  337159 main.go:143] libmachine: Parsing certificate...
	I1210 07:15:37.943642  337159 cli_runner.go:164] Run: docker network inspect flannel-225109 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:15:37.974905  337159 cli_runner.go:211] docker network inspect flannel-225109 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:15:37.974984  337159 network_create.go:284] running [docker network inspect flannel-225109] to gather additional debugging logs...
	I1210 07:15:37.975048  337159 cli_runner.go:164] Run: docker network inspect flannel-225109
	W1210 07:15:37.994915  337159 cli_runner.go:211] docker network inspect flannel-225109 returned with exit code 1
	I1210 07:15:37.994943  337159 network_create.go:287] error running [docker network inspect flannel-225109]: docker network inspect flannel-225109: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-225109 not found
	I1210 07:15:37.994965  337159 network_create.go:289] output of [docker network inspect flannel-225109]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-225109 not found
	
	** /stderr **
	I1210 07:15:37.995106  337159 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:15:38.016642  337159 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 07:15:38.016991  337159 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 07:15:38.017291  337159 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 07:15:38.017707  337159 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a7ab20}
	I1210 07:15:38.017739  337159 network_create.go:124] attempt to create docker network flannel-225109 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:15:38.017803  337159 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-225109 flannel-225109
	I1210 07:15:38.087793  337159 network_create.go:108] docker network flannel-225109 192.168.76.0/24 created
	I1210 07:15:38.087844  337159 kic.go:121] calculated static IP "192.168.76.2" for the "flannel-225109" container
	I1210 07:15:38.087923  337159 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:15:38.104152  337159 cli_runner.go:164] Run: docker volume create flannel-225109 --label name.minikube.sigs.k8s.io=flannel-225109 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:15:38.114007  337159 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:15:38.121265  337159 oci.go:103] Successfully created a docker volume flannel-225109
	I1210 07:15:38.121408  337159 cli_runner.go:164] Run: docker run --rm --name flannel-225109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-225109 --entrypoint /usr/bin/test -v flannel-225109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:15:38.297249  337159 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:15:38.481509  337159 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481614  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:15:38.481624  337159 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 132.235µs
	I1210 07:15:38.481632  337159 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:15:38.481643  337159 cache.go:107] acquiring lock: {Name:mkeb1fa8dab49600ef80d840b464bd8533c4cb6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481674  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 07:15:38.481679  337159 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3" took 37.637µs
	I1210 07:15:38.481685  337159 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 07:15:38.481696  337159 cache.go:107] acquiring lock: {Name:mkb1c8b0d22db746576a3ea57ea1cd2bf308d320 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481724  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 07:15:38.481729  337159 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3" took 36.07µs
	I1210 07:15:38.481735  337159 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 07:15:38.481745  337159 cache.go:107] acquiring lock: {Name:mk1c8262b3af50ea9f0658e134d5d1e45690c2ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481769  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 07:15:38.481774  337159 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3" took 30.368µs
	I1210 07:15:38.481779  337159 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 07:15:38.481789  337159 cache.go:107] acquiring lock: {Name:mkf076b1a6306c7ead02f620a535f4dce2be2a45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481815  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 07:15:38.481819  337159 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3" took 32.755µs
	I1210 07:15:38.481824  337159 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 07:15:38.481833  337159 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481856  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:15:38.481862  337159 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.277µs
	I1210 07:15:38.481867  337159 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:15:38.481875  337159 cache.go:107] acquiring lock: {Name:mk49179ee96b27fc020a2438a2984fba8f050e2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481899  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:15:38.481913  337159 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.03µs
	I1210 07:15:38.481919  337159 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:15:38.481927  337159 cache.go:107] acquiring lock: {Name:mk8ce68d2a56a7659694e14d150cebfb6fc3181f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:15:38.481953  337159 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 07:15:38.481957  337159 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.991µs
	I1210 07:15:38.481964  337159 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 07:15:38.481970  337159 cache.go:87] Successfully saved all images to host disk.
	I1210 07:15:38.685397  337159 oci.go:107] Successfully prepared a docker volume flannel-225109
	I1210 07:15:38.685464  337159 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	W1210 07:15:38.685593  337159 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:15:38.685695  337159 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:15:38.741846  337159 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-225109 --name flannel-225109 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-225109 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-225109 --network flannel-225109 --ip 192.168.76.2 --volume flannel-225109:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:15:39.046810  337159 cli_runner.go:164] Run: docker container inspect flannel-225109 --format={{.State.Running}}
	I1210 07:15:39.075907  337159 cli_runner.go:164] Run: docker container inspect flannel-225109 --format={{.State.Status}}
	I1210 07:15:39.098146  337159 cli_runner.go:164] Run: docker exec flannel-225109 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:15:39.152619  337159 oci.go:144] the created container "flannel-225109" has a running status.
	I1210 07:15:39.152646  337159 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/flannel-225109/id_rsa...
	I1210 07:15:39.336029  337159 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/flannel-225109/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:15:39.361912  337159 cli_runner.go:164] Run: docker container inspect flannel-225109 --format={{.State.Status}}
	I1210 07:15:39.393489  337159 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:15:39.393509  337159 kic_runner.go:114] Args: [docker exec --privileged flannel-225109 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:15:39.456619  337159 cli_runner.go:164] Run: docker container inspect flannel-225109 --format={{.State.Status}}
	I1210 07:15:39.476939  337159 machine.go:94] provisionDockerMachine start ...
	I1210 07:15:39.477028  337159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-225109
	I1210 07:15:39.499995  337159 main.go:143] libmachine: Using SSH client type: native
	I1210 07:15:39.500366  337159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1210 07:15:39.500385  337159 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:15:39.500994  337159 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46988->127.0.0.1:33118: read: connection reset by peer
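
The cache.go lines above show minikube's per-image cache layout: each image ref maps to a tar file under cache/images/<arch>/, with the tag separator ':' rewritten to '_'. A minimal sketch of that mapping, assuming nothing beyond the paths visible in the log (the helper name is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath sketches the image-ref -> on-disk mapping visible in the
// cache.go:96 log lines: ':' in the tag becomes '_' under cache/images/<arch>/.
func cachePath(minikubeHome, arch, imageRef string) string {
	return filepath.Join(minikubeHome, "cache", "images", arch,
		strings.ReplaceAll(imageRef, ":", "_"))
}

func main() {
	fmt.Println(cachePath("/home/jenkins/minikube-integration/22094-2307/.minikube",
		"arm64", "registry.k8s.io/kube-apiserver:v1.34.3"))
	// -> .../cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3, as in the log
}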
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777113414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777127404Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777160594Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777174535Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777184742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777195950Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777205197Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777215487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777231528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777260304Z" level=info msg="Connect containerd service"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777515527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.778069290Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789502105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789748787Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789677541Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.795087082Z" level=info msg="Start recovering state"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.809745847Z" level=info msg="Start event monitor"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.809929530Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810001120Z" level=info msg="Start streaming server"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810060181Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810114328Z" level=info msg="runtime interface starting up..."
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810165307Z" level=info msg="starting plugins..."
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810240475Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:00:37 no-preload-320236 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.811841962Z" level=info msg="containerd successfully booted in 0.055335s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:15:43.690783    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:15:43.691496    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:15:43.693810    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:15:43.694263    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:15:43.695827    8126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:15:43 up  1:58,  0 user,  load average: 1.48, 1.16, 1.22
	Linux no-preload-320236 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:15:40 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:15:41 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 10 07:15:41 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:41 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:41 no-preload-320236 kubelet[7987]: E1210 07:15:41.207291    7987 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:15:41 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:15:41 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:15:41 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 10 07:15:41 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:41 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:41 no-preload-320236 kubelet[7993]: E1210 07:15:41.953018    7993 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:15:41 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:15:41 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:15:42 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 10 07:15:42 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:42 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:42 no-preload-320236 kubelet[8014]: E1210 07:15:42.751849    8014 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:15:42 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:15:42 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:15:43 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 10 07:15:43 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:43 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:15:43 no-preload-320236 kubelet[8088]: E1210 07:15:43.498274    8088 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:15:43 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:15:43 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
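The kubelet section of the log above shows the root cause behind the dead apiserver: every kubelet start exits with "kubelet is configured to not run on a host using cgroup v1", and systemd is already at restart 1205, so localhost:8443 never comes up and the describe-nodes connection-refused errors follow. The Ubuntu 20.04 hosts in this run still boot cgroup v1 by default. A quick way to confirm which cgroup hierarchy a host runs, sketched in Go in the runc style of detection (the magic constant is CGROUP2_SUPER_MAGIC from linux/magic.h; the program itself is illustrative, not part of the test suite):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// cgroup2Magic is CGROUP2_SUPER_MAGIC from linux/magic.h.
const cgroup2Magic = 0x63677270

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == cgroup2Magic {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 - matches the kubelet validation failure above")
	}
}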
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 2 (428.497285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-168808 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (330.481874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-168808 -n newest-cni-168808
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (321.923894ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-168808 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (316.395186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-168808 -n newest-cni-168808
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (303.276065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
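All four status probes above return "Stopped" because the --format flag takes a Go text/template evaluated against minikube's status struct; with kubelet crash-looping, pause and unpause have nothing running to toggle, so the post-pause and post-unpause expectations can never be met. A minimal reproduction of just the formatting step, with a hypothetical Status type standing in for minikube's:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status struct; only the fields
// exercised by the probes above are sketched here.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	tmpl.Execute(os.Stdout, st) // prints "Stopped", matching every probe above
}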
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-168808
helpers_test.go:244: (dbg) docker inspect newest-cni-168808:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	        "Created": "2025-12-10T06:55:56.205654512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:06:01.504514541Z",
	            "FinishedAt": "2025-12-10T07:05:59.862084086Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hosts",
	        "LogPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3-json.log",
	        "Name": "/newest-cni-168808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-168808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-168808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	                "LowerDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-168808",
	                "Source": "/var/lib/docker/volumes/newest-cni-168808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-168808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-168808",
	                "name.minikube.sigs.k8s.io": "newest-cni-168808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "515b233ea68ef1c9ed300584d10d72421aa77f4775a69279a293bdf725b2e113",
	            "SandboxKey": "/var/run/docker/netns/515b233ea68e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-168808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e3:f7:16:bb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fedd4ad26097ebf6757101ef8e22a141acd4ba740aa95d5f1eab7ffc232007f5",
	                    "EndpointID": "058f1c535f16248f59aad5f1fc5aceccd4ce55e84235161b803daa93fdc8a70f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-168808",
	                        "7d1db3aa80a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
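One detail worth reading out of the inspect output: the HostConfig.PortBindings entries request "HostIp": "127.0.0.1" with an empty "HostPort", which tells Docker to pick ephemeral host ports at container start; the realized ports (33103-33107) then appear under NetworkSettings.Ports. A small sketch of pulling those realized ports out of docker inspect JSON (field names are taken from the output above; the program is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspect models only the slice of the docker inspect document we need.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct{ HostIp, HostPort string }
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "newest-cni-168808").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for port, binds := range containers[0].NetworkSettings.Ports {
		for _, b := range binds {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort) // e.g. 22/tcp -> 127.0.0.1:33103
		}
	}
}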
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (336.46597ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25: (1.517821806s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │                     │
	│ stop    │ -p no-preload-320236 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ addons  │ enable dashboard -p no-preload-320236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │                     │
	│ stop    │ -p newest-cni-168808 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-168808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │ 10 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │                     │
	│ image   │ newest-cni-168808 image list --format=json                                                                                                                                                                                                               │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:12 UTC │ 10 Dec 25 07:12 UTC │
	│ pause   │ -p newest-cni-168808 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:12 UTC │ 10 Dec 25 07:12 UTC │
	│ unpause │ -p newest-cni-168808 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:12 UTC │ 10 Dec 25 07:12 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:06:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:06:00.999721  303437 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:06:00.999928  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:00.999941  303437 out.go:374] Setting ErrFile to fd 2...
	I1210 07:06:00.999948  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:01.000291  303437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:06:01.000840  303437 out.go:368] Setting JSON to false
	I1210 07:06:01.001958  303437 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6511,"bootTime":1765343850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:06:01.002049  303437 start.go:143] virtualization:  
	I1210 07:06:01.005229  303437 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:06:01.009127  303437 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:06:01.009191  303437 notify.go:221] Checking for updates...
	I1210 07:06:01.015115  303437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:06:01.018047  303437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:01.021396  303437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:06:01.024347  303437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:06:01.027298  303437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:06:01.030670  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:01.031359  303437 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:06:01.059280  303437 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:06:01.059409  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.117784  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.1083965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.117913  303437 docker.go:319] overlay module found
	I1210 07:06:01.121244  303437 out.go:179] * Using the docker driver based on existing profile
	I1210 07:06:01.124129  303437 start.go:309] selected driver: docker
	I1210 07:06:01.124152  303437 start.go:927] validating driver "docker" against &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.124257  303437 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:06:01.124971  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.177684  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.168448125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.178039  303437 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:06:01.178072  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:01.178124  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:01.178165  303437 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.183109  303437 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 07:06:01.185906  303437 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:06:01.188882  303437 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:06:01.191653  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:01.191725  303437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:06:01.211624  303437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:06:01.211647  303437 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:06:01.245655  303437 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 07:06:01.410333  303437 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
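[Editor's note] The two 404s above are minikube probing its preload mirrors in order (the GCS bucket first, then the GitHub release) before giving up and caching images one by one, which is what the cache.go lines further down go on to do. A minimal sketch of that probe-and-fall-back pattern, using only the Go standard library (the URL list is copied from the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "net/http"
    )

    // firstAvailable returns the first URL that answers a HEAD request
    // with 200, mirroring the mirror order seen in the log above.
    func firstAvailable(urls []string) (string, error) {
        for _, u := range urls {
            resp, err := http.Head(u)
            if err != nil {
                continue // network error: try the next mirror
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return u, nil
            }
        }
        return "", fmt.Errorf("no mirror has the preload; fall back to per-image caching")
    }

    func main() {
        tarball := "preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4"
        mirrors := []string{
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/" + tarball,
            "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/" + tarball,
        }
        if u, err := firstAvailable(mirrors); err == nil {
            fmt.Println("preload found at", u)
        } else {
            fmt.Println(err) // both mirrors 404 for v1.35.0-rc.1, as in this run
        }
    }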
	I1210 07:06:01.410482  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.410710  303437 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:06:01.410741  303437 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:01.410794  303437 start.go:364] duration metric: took 32.001µs to acquireMachinesLock for "newest-cni-168808"
	I1210 07:06:01.410811  303437 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:06:01.410817  303437 fix.go:54] fixHost starting: 
	I1210 07:06:01.411108  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.411381  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
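[Editor's note] The repeated binary.go:80 lines show kubeadm being fetched with a sidecar checksum URL ("checksum=file:<url>.sha256") instead of being cached. A sketch of verifying a download against such a sidecar file, under the assumption that the .sha256 file carries a hex SHA-256 digest as its first token:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // fetch downloads url, failing on any non-200 status.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    // verifiedDownload fetches url and checks it against the digest
    // published in the sidecar file at url+".sha256".
    func verifiedDownload(url string) ([]byte, error) {
        body, err := fetch(url)
        if err != nil {
            return nil, err
        }
        sidecar, err := fetch(url + ".sha256")
        if err != nil {
            return nil, err
        }
        fields := strings.Fields(string(sidecar))
        if len(fields) == 0 {
            return nil, fmt.Errorf("empty checksum sidecar for %s", url)
        }
        sum := sha256.Sum256(body)
        if got := hex.EncodeToString(sum[:]); got != fields[0] {
            return nil, fmt.Errorf("checksum mismatch: got %s want %s", got, fields[0])
        }
        return body, nil
    }

    func main() {
        bin, err := verifiedDownload("https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("verified kubeadm,", len(bin), "bytes")
    }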
	I1210 07:06:01.445269  303437 fix.go:112] recreateIfNeeded on newest-cni-168808: state=Stopped err=<nil>
	W1210 07:06:01.445299  303437 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 07:05:57.413623  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:59.413665  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:01.448589  303437 out.go:252] * Restarting existing docker container for "newest-cni-168808" ...
	I1210 07:06:01.448678  303437 cli_runner.go:164] Run: docker start newest-cni-168808
	I1210 07:06:01.609744  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.770299  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.790186  303437 kic.go:430] container "newest-cni-168808" state is running.
	I1210 07:06:01.790574  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:01.816467  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.816783  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.816990  303437 machine.go:94] provisionDockerMachine start ...
	I1210 07:06:01.817053  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:01.864829  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:01.865171  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:01.865181  303437 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:06:01.865918  303437 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
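[Editor's note] With the docker driver there is no VM address to reach: sshd runs inside the container, and minikube dials the published 22/tcp mapping (port 33103 here) on 127.0.0.1. The handshake EOF above is the first dial racing the freshly restarted container, and it is retried until sshd answers. A sketch of that dial-with-retry, assuming golang.org/x/crypto/ssh and the key path printed later in the log:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd inside the container is ready;
    // the first attempt often fails with a handshake EOF, as in the log.
    func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
            Timeout:         5 * time.Second,
        }
        var lastErr error
        for i := 0; i < attempts; i++ {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err
            time.Sleep(time.Second)
        }
        return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        // Port and key path are the ones reported in this log; adjust as needed.
        c, err := dialWithRetry("127.0.0.1:33103", "docker",
            os.Getenv("HOME")+"/.minikube/machines/newest-cni-168808/id_rsa", 10)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer c.Close()
        fmt.Println("connected")
    }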
	I1210 07:06:02.031349  303437 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031449  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:06:02.031458  303437 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.682µs
	I1210 07:06:02.031466  303437 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:06:02.031488  303437 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031520  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:06:02.031525  303437 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 49.765µs
	I1210 07:06:02.031536  303437 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031546  303437 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031572  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:06:02.031577  303437 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32µs
	I1210 07:06:02.031583  303437 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031592  303437 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031616  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:06:02.031621  303437 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 30.351µs
	I1210 07:06:02.031626  303437 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031635  303437 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031658  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:06:02.031663  303437 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 29.047µs
	I1210 07:06:02.031668  303437 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031676  303437 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031702  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:06:02.031711  303437 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.042µs
	I1210 07:06:02.031716  303437 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:06:02.031725  303437 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031752  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:06:02.031757  303437 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 32.509µs
	I1210 07:06:02.031762  303437 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:06:02.031770  303437 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031794  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:06:02.031799  303437 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.973µs
	I1210 07:06:02.031809  303437 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:06:02.031817  303437 cache.go:87] Successfully saved all images to host disk.
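[Editor's note] Each cache.go block above takes a per-image lock, stats the tar under .minikube/cache/images, and records success in a few dozen microseconds when the file already exists. The essential shape of that exists-then-skip loop, using the same name-to-path convention the log shows (the ':' tag separator becomes '_'):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachePath maps an image ref to its on-disk tar path, mirroring the
    // log's convention: the tag separator ':' becomes '_'.
    func cachePath(root, image string) string {
        return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
    }

    // ensureCached skips images whose tar file is already on disk.
    func ensureCached(root string, images []string) {
        for _, img := range images {
            p := cachePath(root, img)
            if _, err := os.Stat(p); err == nil {
                fmt.Printf("cache image %q -> %q: exists, skipping\n", img, p)
                continue
            }
            fmt.Printf("cache image %q -> %q: would pull and save\n", img, p)
        }
    }

    func main() {
        ensureCached("/home/jenkins/.minikube/cache/images/arm64", []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/pause:3.10.1",
        })
    }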
	I1210 07:06:05.019038  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.019065  303437 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 07:06:05.019142  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.038167  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.038497  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.038514  303437 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 07:06:05.212495  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.212574  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.236676  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.236997  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.237020  303437 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:06:05.387591  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:06:05.387661  303437 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:06:05.387701  303437 ubuntu.go:190] setting up certificates
	I1210 07:06:05.387718  303437 provision.go:84] configureAuth start
	I1210 07:06:05.387781  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.406720  303437 provision.go:143] copyHostCerts
	I1210 07:06:05.406812  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:06:05.406827  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:06:05.406903  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:06:05.407068  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:06:05.407080  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:06:05.407115  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:06:05.409257  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:06:05.409288  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:06:05.409367  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:06:05.409470  303437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
	I1210 07:06:05.457283  303437 provision.go:177] copyRemoteCerts
	I1210 07:06:05.457369  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:06:05.457416  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.474754  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.578879  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:06:05.596686  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:06:05.614316  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:06:05.632529  303437 provision.go:87] duration metric: took 244.787433ms to configureAuth
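[Editor's note] configureAuth regenerates the machine's server certificate from the minikube CA with the SAN set shown above (127.0.0.1, 192.168.76.2, localhost, minikube, the hostname). A compact sketch of issuing such a SAN-bearing server cert with crypto/x509; the throwaway CA in main stands in for the on-disk ca.pem/ca-key.pem pair the real flow loads:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate carrying the given SANs,
    // the same role provision.go's "generating server cert" step plays.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
        dnsNames []string, ips []net.IP) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-168808"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        // Throwaway self-signed CA for the sketch only.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        ca, _ := x509.ParseCertificate(caDER)
        pemBytes, err := issueServerCert(ca, caKey,
            []string{"localhost", "minikube", "newest-cni-168808"},
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")})
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pemBytes))
    }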
	I1210 07:06:05.632557  303437 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:06:05.632770  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:05.632780  303437 machine.go:97] duration metric: took 3.815782677s to provisionDockerMachine
	I1210 07:06:05.632794  303437 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 07:06:05.632814  303437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:06:05.632866  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:06:05.632909  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.651511  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.755084  303437 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:06:05.758541  303437 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:06:05.758569  303437 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:06:05.758581  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:06:05.758636  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:06:05.758716  303437 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:06:05.758818  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:06:05.766638  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:05.784153  303437 start.go:296] duration metric: took 151.337167ms for postStartSetup
	I1210 07:06:05.784245  303437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:06:05.784296  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.801680  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.903956  303437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:06:05.910414  303437 fix.go:56] duration metric: took 4.499590898s for fixHost
	I1210 07:06:05.910487  303437 start.go:83] releasing machines lock for "newest-cni-168808", held for 4.499684126s
	I1210 07:06:05.910597  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.931294  303437 ssh_runner.go:195] Run: cat /version.json
	I1210 07:06:05.931352  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.933029  303437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:06:05.933104  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.966773  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.968660  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	W1210 07:06:01.914114  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:04.412714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:06.413234  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:06.164421  303437 ssh_runner.go:195] Run: systemctl --version
	I1210 07:06:06.170684  303437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:06:06.174920  303437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:06:06.174984  303437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:06:06.182557  303437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
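[Editor's note] The find/-exec mv step above renames any bridge or podman CNI configs to *.mk_disabled so that only the CNI minikube manages (kindnet, per cni.go:143) stays active; this run finds none to disable. The same sweep, sketched with the standard library:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableForeignCNI renames bridge/podman configs so the runtime ignores them.
    func disableForeignCNI(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                old := filepath.Join(dir, name)
                if err := os.Rename(old, old+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled", old)
            }
        }
        return nil
    }

    func main() {
        if err := disableForeignCNI("/etc/cni/net.d"); err != nil {
            fmt.Println(err)
        }
        // No matches corresponds to the log's "nothing to disable" case.
    }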
	I1210 07:06:06.182578  303437 start.go:496] detecting cgroup driver to use...
	I1210 07:06:06.182611  303437 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:06:06.182660  303437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:06:06.200334  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:06:06.213740  303437 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:06:06.213811  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:06:06.229308  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:06:06.242262  303437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:06:06.362603  303437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:06:06.483045  303437 docker.go:234] disabling docker service ...
	I1210 07:06:06.483112  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:06:06.498250  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:06:06.511747  303437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:06:06.628460  303437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:06:06.766872  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:06:06.779978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:06:06.794352  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:06.943808  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:06:06.954116  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:06:06.962677  303437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:06:06.962740  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:06:06.971255  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:06.980030  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:06:06.988476  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:07.007850  303437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:06:07.016475  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:06:07.025456  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:06:07.034855  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:06:07.044266  303437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:06:07.052503  303437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:06:07.060278  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:07.175410  303437 ssh_runner.go:195] Run: sudo systemctl restart containerd
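[Editor's note] The run of sed one-liners rewrites /etc/containerd/config.toml in place: sandbox_image, SystemdCgroup=false to match the cgroupfs driver detected on the host, the runc v2 runtime, conf_dir, and unprivileged ports, followed by daemon-reload and a containerd restart. One such indentation-preserving key rewrite, sketched in Go (the file path in main is a hypothetical local copy, not the live config):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey rewrites a `key = value` line in a TOML file, preserving
    // indentation the same way the log's sed one-liners do.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^( *)` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte("${1}"+key+" = "+value))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setTOMLKey("config.toml", "SystemdCgroup", "false"); err != nil {
            fmt.Println(err)
        }
    }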
	I1210 07:06:07.276715  303437 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:06:07.276786  303437 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:06:07.280624  303437 start.go:564] Will wait 60s for crictl version
	I1210 07:06:07.280687  303437 ssh_runner.go:195] Run: which crictl
	I1210 07:06:07.284270  303437 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:06:07.312279  303437 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:06:07.312345  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.332603  303437 ssh_runner.go:195] Run: containerd --version
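[Editor's note] After the restart, start.go polls for up to 60s for the containerd socket and then for a working crictl before declaring the runtime ready. The wait loop is just a stat with a deadline; a minimal version:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or the deadline passes,
    // like the "Will wait 60s for socket path" step in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("containerd socket is up")
    }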
	I1210 07:06:07.358017  303437 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:06:07.360940  303437 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:06:07.377362  303437 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:06:07.381128  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.393654  303437 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:06:07.396326  303437 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:06:07.396576  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.559787  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.709730  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.859001  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:07.859128  303437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:06:07.883821  303437 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:06:07.883846  303437 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:06:07.883855  303437 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:06:07.883958  303437 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:06:07.884031  303437 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:06:07.913929  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:07.913952  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:07.913973  303437 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:06:07.913999  303437 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:06:07.914120  303437 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:06:07.914189  303437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:06:07.921856  303437 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:06:07.921924  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:06:07.929166  303437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:06:07.941324  303437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:06:07.954047  303437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
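[Editor's note] The three "scp memory -->" lines push generated bytes (the kubelet drop-in, the unit file, kubeadm.yaml.new) straight from memory to remote paths, with no temp file on the host side. A sketch of that push over an established SSH session, assuming golang.org/x/crypto/ssh and sudo tee on the remote end (the demo target path is hypothetical):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushBytes writes data to remotePath by piping it into `sudo tee`,
    // the moral equivalent of the log's "scp memory --> <path>" steps.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remotePath))
    }

    func main() {
        key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/newest-cni-168808/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33103", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        if err := pushBytes(client, []byte("hello\n"), "/var/tmp/minikube/demo.txt"); err != nil {
            panic(err)
        }
    }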
	I1210 07:06:07.966208  303437 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:06:07.969747  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.979238  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.094271  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:08.111901  303437 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 07:06:08.111935  303437 certs.go:195] generating shared ca certs ...
	I1210 07:06:08.111952  303437 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.112156  303437 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:06:08.112239  303437 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:06:08.112261  303437 certs.go:257] generating profile certs ...
	I1210 07:06:08.112411  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 07:06:08.112508  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 07:06:08.112594  303437 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 07:06:08.112776  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:06:08.112825  303437 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:06:08.112863  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:06:08.112899  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:06:08.112950  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:06:08.112979  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:06:08.113053  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:08.113737  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:06:08.131868  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:06:08.149347  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:06:08.173211  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:06:08.201112  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:06:08.217931  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:06:08.234927  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:06:08.255525  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:06:08.274117  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:06:08.291924  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:06:08.309223  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:06:08.326082  303437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:06:08.338602  303437 ssh_runner.go:195] Run: openssl version
	I1210 07:06:08.345277  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.353152  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:06:08.360717  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364534  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364612  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.406623  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:06:08.414672  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.422361  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:06:08.430022  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433878  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433973  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.475572  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:06:08.483285  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.491000  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:06:08.498512  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502241  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502306  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.543558  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
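[Editor's note] The test -s / ln -fs / openssl x509 -hash sequence above installs each CA under /etc/ssl/certs by its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trust anchors. A sketch driving the same commands from Go with os/exec; the flags match the ones in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA symlinks certPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name, matching the log's ln -fs + x509 -hash sequence.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if err := exec.Command("sudo", "ln", "-fs", certPath, link).Run(); err != nil {
            return err
        }
        fmt.Println("installed", certPath, "as", link)
        return nil
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }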
	I1210 07:06:08.551469  303437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:06:08.555461  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:06:08.597134  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:06:08.638002  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:06:08.678965  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:06:08.720427  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:06:08.763492  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
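[Editor's note] `openssl x509 -checkend 86400` exits non-zero when a certificate expires within 86400 seconds, so each control-plane cert above is being screened for expiry within a day; a failure would trigger regeneration. The native equivalent with crypto/x509, using two of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // the native analogue of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            if err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("%s expires within 24h: %v\n", p, soon)
        }
    }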
	I1210 07:06:08.809518  303437 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:08.809633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:06:08.809696  303437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:06:08.836487  303437 cri.go:89] found id: ""
	I1210 07:06:08.836609  303437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:06:08.844505  303437 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:06:08.844525  303437 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:06:08.844604  303437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:06:08.852026  303437 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:06:08.852667  303437 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.852944  303437 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-168808" cluster setting kubeconfig missing "newest-cni-168808" context setting]
	I1210 07:06:08.853395  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.854743  303437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:06:08.863687  303437 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:06:08.863719  303437 kubeadm.go:602] duration metric: took 19.187765ms to restartPrimaryControlPlane
	I1210 07:06:08.863729  303437 kubeadm.go:403] duration metric: took 54.219605ms to StartCluster
	I1210 07:06:08.863764  303437 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.863854  303437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.864943  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
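[Editor's note] kubeconfig.go finds the profile absent from the kubeconfig file and repairs it under a write lock, adding both the missing cluster and context entries. A sketch of that repair using k8s.io/client-go's clientcmd package (assumed available as a dependency; error handling trimmed to essentials):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds the missing cluster and context entries for a
    // profile, the fix the log describes as "needs updating (will repair)".
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            c := clientcmdapi.NewCluster()
            c.Server = server
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := clientcmdapi.NewContext()
            ctx.Cluster = name
            ctx.AuthInfo = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        err := repairKubeconfig("/home/jenkins/minikube-integration/22094-2307/kubeconfig",
            "newest-cni-168808", "https://192.168.76.2:8443")
        fmt.Println(err)
    }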
	I1210 07:06:08.865201  303437 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:06:08.865553  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:08.865626  303437 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:06:08.865710  303437 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-168808"
	I1210 07:06:08.865725  303437 addons.go:70] Setting dashboard=true in profile "newest-cni-168808"
	I1210 07:06:08.865738  303437 addons.go:70] Setting default-storageclass=true in profile "newest-cni-168808"
	I1210 07:06:08.865748  303437 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-168808"
	I1210 07:06:08.865755  303437 addons.go:239] Setting addon dashboard=true in "newest-cni-168808"
	W1210 07:06:08.865763  303437 addons.go:248] addon dashboard should already be in state true
	I1210 07:06:08.865787  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866234  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.865732  303437 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-168808"
	I1210 07:06:08.866264  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866892  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.866245  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.870618  303437 out.go:179] * Verifying Kubernetes components...
	I1210 07:06:08.877218  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.909365  303437 addons.go:239] Setting addon default-storageclass=true in "newest-cni-168808"
	I1210 07:06:08.909422  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.909955  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.935168  303437 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:06:08.938081  303437 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:06:08.938245  303437 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:06:08.941690  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:06:08.941720  303437 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:06:08.941756  303437 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:08.941772  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:06:08.941809  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.941835  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.974920  303437 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:08.974945  303437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:06:08.975007  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:09.018425  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.019111  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.028670  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
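The three cli_runner/sshutil lines above show how minikube reaches the node: the Go template in the inspect command pulls the host port Docker bound to the container's 22/tcp (33103 here), and sshutil then dials 127.0.0.1 on that port as user "docker" with the profile's id_rsa key. A minimal illustrative sketch of that port lookup follows; the function name sshHostPort is hypothetical, and the template is the one from the log minus its quoting wrapper.

    // Hypothetical sketch: resolve the host port Docker mapped to the
    // container's 22/tcp, mirroring the inspect template in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshHostPort(container string) (string, error) {
    	// The Go template from the inspect command above (without the
    	// surrounding single quotes); Docker evaluates it against the
    	// container's NetworkSettings and prints the mapped host port.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("newest-cni-168808")
    	if err != nil {
    		panic(err)
    	}
    	// sshutil.go then dials 127.0.0.1:<port> as user "docker" with the
    	// profile's id_rsa key, per the "new ssh client" lines above.
    	fmt.Println("ssh endpoint:", "127.0.0.1:"+port)
    }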
	I1210 07:06:09.182128  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:09.189848  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:09.218621  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:06:09.218696  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:06:09.233237  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:09.248580  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:06:09.248655  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:06:09.280152  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:06:09.280225  303437 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:06:09.294171  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:06:09.294239  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:06:09.308986  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:06:09.309057  303437 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:06:09.323118  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:06:09.323195  303437 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:06:09.337212  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:06:09.337284  303437 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:06:09.351939  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:06:09.352006  303437 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:06:09.364684  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.364749  303437 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:06:09.377472  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.912036  303437 api_server.go:52] waiting for apiserver process to appear ...
	W1210 07:06:09.912102  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912165  303437 retry.go:31] will retry after 137.554553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
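From here on the log is one long retry loop: each apply fails, addons.go logs "apply failed, will retry", and retry.go reschedules it after a short, growing, jittered delay (137ms, 162ms, 156ms, then roughly 400-650ms, eventually 1.7s). Below is a minimal sketch of that retry-with-jittered-backoff pattern, assuming it roughly matches what the retry.go:31 lines suggest; it is not minikube's actual implementation, and retryWithBackoff is a name invented for illustration.

    // Minimal sketch of retry with growing, jittered delays, as suggested
    // by the "will retry after ..." intervals in the log.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Double the delay each attempt and add jitter, roughly
    		// matching the spread of intervals logged above.
    		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(5, 150*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connect: connection refused")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }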
	W1210 07:06:09.912180  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912239  303437 retry.go:31] will retry after 162.08127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912111  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:09.912371  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912391  303437 retry.go:31] will retry after 156.096194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.049986  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:10.068682  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:10.075250  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:10.139495  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.139526  303437 retry.go:31] will retry after 525.238587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196161  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196246  303437 retry.go:31] will retry after 422.355289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196206  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196316  303437 retry.go:31] will retry after 388.387448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
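Note that every failure here is the same failure: before applying, kubectl fetches the OpenAPI schema from the apiserver (GET https://localhost:8443/openapi/v2) to validate the manifests, and that dial is refused because the apiserver is not yet listening, regardless of which manifest is being applied. (kubectl's own stderr points out that --validate=false would skip this step; minikube retries instead.) A hedged diagnostic sketch of probing that endpoint, under the assumption that any HTTP response at all means the socket is up; apiserverUp is a hypothetical helper, not part of minikube:

    // Hypothetical readiness probe for the endpoint kubectl is failing on:
    // an HTTPS GET against the apiserver's /openapi/v2. Certificate checks
    // are skipped because this is a localhost diagnostic, not a client.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func apiserverUp(addr string) bool {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://" + addr + "/openapi/v2")
    	if err != nil {
    		return false // e.g. "connect: connection refused", as in the log
    	}
    	resp.Body.Close()
    	// Any HTTP status (even 401/403) means the socket is accepting
    	// connections, which is all the failing dial above requires.
    	return true
    }

    func main() {
    	fmt.Println("apiserver accepting connections:", apiserverUp("localhost:8443"))
    }

This also explains the interleaved "sudo pgrep -xnf kube-apiserver.*minikube.*" runs: api_server.go is polling once per second for the apiserver process to appear while the addon applies keep retrying.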
	I1210 07:06:10.412254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:10.585608  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:10.619095  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:10.648889  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.648984  303437 retry.go:31] will retry after 452.281973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.665111  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:10.718838  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.718922  303437 retry.go:31] will retry after 323.626302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.751170  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.751201  303437 retry.go:31] will retry after 426.205037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.912296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:08.413486  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:10.912684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
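The two W lines above come from a different process (pid 296020): the parallel no-preload test polling its own node's Ready condition against 192.168.85.2:8443, and hitting the same connection-refused symptom. A hedged client-go sketch of the kind of check node_ready.go is retrying; the kubeconfig path and node name are taken from the log, but the code itself is illustrative, not minikube's, and requires k8s.io/client-go as a dependency.

    // Sketch: read a node's Ready condition with client-go. With the
    // apiserver down, the Get fails exactly like the log lines above:
    // dial tcp 192.168.85.2:8443: connect: connection refused.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-320236", metav1.GetOptions{})
    	if err != nil {
    		panic(err) // connection refused while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Println("Ready:", c.Status)
    		}
    	}
    }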
	I1210 07:06:11.043189  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:11.101706  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.108011  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.108097  303437 retry.go:31] will retry after 465.500211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:11.171627  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.171733  303437 retry.go:31] will retry after 644.635053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.177835  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:11.248736  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.248773  303437 retry.go:31] will retry after 646.277835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.413044  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:11.574386  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:11.635719  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.635755  303437 retry.go:31] will retry after 992.827501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.816838  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.874310  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.874341  303437 retry.go:31] will retry after 847.092889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.895446  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:11.912890  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:11.979233  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.979274  303437 retry.go:31] will retry after 1.723803171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.412929  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:12.629708  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:12.711328  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.711402  303437 retry.go:31] will retry after 1.682909305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.721580  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:12.787715  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.787755  303437 retry.go:31] will retry after 1.523563907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.912980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.412270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.704137  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:13.769291  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.769319  303437 retry.go:31] will retry after 2.655752177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.912604  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:14.312036  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:14.379977  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.380010  303437 retry.go:31] will retry after 2.120509482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.395420  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:14.412979  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:14.494970  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.495005  303437 retry.go:31] will retry after 2.083776468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.913027  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.412429  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.912376  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:12.913304  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:15.412868  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:16.412255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:16.425325  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:16.500296  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.500325  303437 retry.go:31] will retry after 1.753545178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.501400  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:16.562473  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.562506  303437 retry.go:31] will retry after 5.63085781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.579894  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:16.640721  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.640756  303437 retry.go:31] will retry after 2.710169887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.912245  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.412350  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.913142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.254741  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:18.317147  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.317176  303437 retry.go:31] will retry after 6.057763532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.912752  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:19.352062  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:19.412870  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:19.413382  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.413410  303437 retry.go:31] will retry after 6.763226999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.913016  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.412997  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.913098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:17.413684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:19.913294  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:21.412278  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:21.913122  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.194391  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:22.251091  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.251123  303437 retry.go:31] will retry after 9.11395006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.412163  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.912351  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.412284  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.913156  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:24.375236  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:24.412827  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:24.440293  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.440322  303437 retry.go:31] will retry after 9.4401753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.912889  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.412233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.912307  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:21.913508  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:23.913605  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:26.413204  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:26.177306  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:26.250932  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.250965  303437 retry.go:31] will retry after 5.997165797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.412268  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:26.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.412900  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.912402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.412186  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.912521  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.412227  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.912255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.413237  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.912254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:28.413461  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:30.913644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:31.366162  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:31.412559  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:31.439835  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.439865  303437 retry.go:31] will retry after 9.181638872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.912411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.248486  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:32.313416  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.313450  303437 retry.go:31] will retry after 9.93876945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.412880  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.912746  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.412590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.880694  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:33.912312  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.964338  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:33.964372  303437 retry.go:31] will retry after 6.698338092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
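
The `retry.go:31` lines show each failed addon apply being retried after a growing, jittered delay (5.99s, 9.18s, 6.70s, 16.87s, ... across the attempts above). A rough shell equivalent of that pattern, purely illustrative and not minikube's actual retry.go implementation:

    delay=5
    for attempt in 1 2 3 4 5; do
      if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
           /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
           -f /etc/kubernetes/addons/storage-provisioner.yaml; then
        break                      # apply succeeded, stop retrying
      fi
      jitter=$((RANDOM % delay))   # jitter keeps retries from synchronizing
      echo "attempt $attempt failed; retrying in $((delay + jitter))s"
      sleep $((delay + jitter))
      delay=$((delay * 2))         # roughly exponential backoff
    done
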
	I1210 07:06:34.413098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:34.912991  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.413188  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.912404  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.413489  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:35.913510  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:38.413592  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:40.413124  296020 node_ready.go:38] duration metric: took 6m0.00088218s for node "no-preload-320236" to be "Ready" ...
	I1210 07:06:40.416430  296020 out.go:203] 
	W1210 07:06:40.419386  296020 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:06:40.419405  296020 out.go:285] * 
	W1210 07:06:40.421537  296020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:06:40.424792  296020 out.go:203] 
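
Process 296020 above gives up after exactly 6m0s of polling node "no-preload-320236": the apiserver at 192.168.85.2:8443 never accepted connections, so the Ready condition could never be read, and the GUEST_START exit is the direct consequence. The wait it performs is roughly equivalent to this sketch (the 360-second bound is illustrative; assumes a kubeconfig pointing at the profile):

    # Poll the node's Ready condition once per second for up to 6 minutes.
    for i in $(seq 1 360); do
      status=$(kubectl get node no-preload-320236 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
      [ "$status" = "True" ] && echo "node is Ready" && exit 0
      sleep 1
    done
    echo "timed out waiting for node Ready" >&2; exit 1
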
	I1210 07:06:36.412320  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:36.912280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.412192  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.912490  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.412402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.912902  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.412781  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.912868  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.413057  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.621960  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:40.663144  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:40.779058  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.779095  303437 retry.go:31] will retry after 16.870406936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:40.830377  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.830410  303437 retry.go:31] will retry after 13.844749205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.912652  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.412296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.912802  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.252520  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:42.323589  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.323630  303437 retry.go:31] will retry after 27.422515535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.412805  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.912953  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.412903  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.912754  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.412272  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.912265  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.412790  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.912791  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.413202  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.912321  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.412292  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.912507  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.412885  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.912342  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.413070  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.912837  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.412236  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.912907  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.913181  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.412208  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.912275  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.412923  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:54.412280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:54.676234  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:54.749679  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:54.749717  303437 retry.go:31] will retry after 32.358913109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:54.913072  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.412886  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.913073  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.412961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.912198  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.412942  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.649751  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:57.723910  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.723937  303437 retry.go:31] will retry after 19.76255611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.912185  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.412253  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.912817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.412285  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.912592  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.412249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.912270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.412382  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.912282  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.412190  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.912865  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.412818  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.912286  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.412820  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.913148  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.412411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.912250  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.412297  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.913174  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.412239  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.912324  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:08.412210  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:08.912197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:08.912278  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:08.940273  303437 cri.go:89] found id: ""
	I1210 07:07:08.940300  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.940309  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:08.940316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:08.940374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:08.976821  303437 cri.go:89] found id: ""
	I1210 07:07:08.976848  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.976857  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:08.976863  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:08.976928  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:09.004516  303437 cri.go:89] found id: ""
	I1210 07:07:09.004546  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.004555  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:09.004561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:09.004633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:09.029569  303437 cri.go:89] found id: ""
	I1210 07:07:09.029593  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.029602  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:09.029609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:09.029666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:09.055232  303437 cri.go:89] found id: ""
	I1210 07:07:09.055256  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.055265  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:09.055281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:09.055342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:09.080957  303437 cri.go:89] found id: ""
	I1210 07:07:09.080978  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.080986  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:09.080992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:09.081051  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:09.105491  303437 cri.go:89] found id: ""
	I1210 07:07:09.105561  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.105583  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:09.105603  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:09.105682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:09.129839  303437 cri.go:89] found id: ""
	I1210 07:07:09.129861  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.129870  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
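
The sweep above queries crictl once per component name and finds nothing: no control-plane container was ever created under /run/containerd/runc/k8s.io, which is consistent with the apiserver never answering. The same check in a single pass (a sketch to run inside the node):

    sudo crictl ps -a -o table | grep -E \
      'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet|kubernetes-dashboard' \
      || echo "no control-plane containers found"
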
	I1210 07:07:09.129879  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:09.129890  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:09.157418  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:09.157444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:09.218619  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:09.218655  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:09.233569  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:09.233598  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:09.299933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:09.299954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:09.299968  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
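
The log-gathering pass above pulls the kubelet and containerd journals, dmesg, and a `kubectl describe nodes` that predictably fails while the apiserver is down. To capture the same evidence by hand (a sketch; the flags are copied from the commands in this log), run inside the node:

    sudo journalctl -u kubelet -n 400    > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.log

For issue reports, `minikube logs --file=logs.txt`, as suggested in the exit banner above, bundles essentially the same sources.
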
	I1210 07:07:09.746365  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:09.810849  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:09.810882  303437 retry.go:31] will retry after 38.106772232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:11.825038  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:11.835407  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:11.835491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:11.859384  303437 cri.go:89] found id: ""
	I1210 07:07:11.859407  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.859416  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:11.859422  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:11.859482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:11.883645  303437 cri.go:89] found id: ""
	I1210 07:07:11.883667  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.883677  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:11.883683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:11.883746  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:11.912907  303437 cri.go:89] found id: ""
	I1210 07:07:11.912987  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.913010  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:11.913029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:11.913135  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:11.954332  303437 cri.go:89] found id: ""
	I1210 07:07:11.954354  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.954363  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:11.954369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:11.954447  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:11.987932  303437 cri.go:89] found id: ""
	I1210 07:07:11.988008  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.988024  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:11.988048  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:11.988134  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:12.016019  303437 cri.go:89] found id: ""
	I1210 07:07:12.016043  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.016052  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:12.016059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:12.016161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:12.041574  303437 cri.go:89] found id: ""
	I1210 07:07:12.041616  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.041625  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:12.041633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:12.041702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:12.067242  303437 cri.go:89] found id: ""
	I1210 07:07:12.067309  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.067335  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:12.067351  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:12.067368  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:12.080423  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:12.080492  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:12.142902  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:12.135099    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.135777    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.137430    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.138066    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.139703    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:12.142926  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:12.142940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:12.170013  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:12.170095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:12.205843  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:12.205871  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
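The cycle above is minikube's control-plane diagnostic sweep: it asks the CRI, via crictl, for every expected control-plane container by name, finds none, and then falls back to collecting dmesg, node, containerd, container-status, and kubelet logs. Below is a minimal Go sketch of that sweep, assuming crictl is on the PATH; the helper name and output format are illustrative, not minikube's actual cri.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same probe as the log's
// "sudo crictl ps -a --quiet --name=<component>" lines: crictl prints one
// container ID per line for matching containers in any state, so an empty
// result corresponds to the `found id: ""` / `0 containers: []` lines above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The component list the sweep walks through, in log order.
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		switch ids, err := listContainerIDs(c); {
		case err != nil:
			fmt.Printf("listing %q failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("No container was found matching %q\n", c)
		default:
			fmt.Printf("%q: %d container(s) found\n", c, len(ids))
		}
	}
}

An all-empty sweep like the one above means the control plane never came up; the later sweeps in this log differ only in timestamps and probe PIDs.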
	I1210 07:07:14.769151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:14.779543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:14.779628  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:14.804854  303437 cri.go:89] found id: ""
	I1210 07:07:14.804877  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.804885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:14.804892  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:14.804951  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:14.829499  303437 cri.go:89] found id: ""
	I1210 07:07:14.829521  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.829529  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:14.829535  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:14.829592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:14.857960  303437 cri.go:89] found id: ""
	I1210 07:07:14.857984  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.857993  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:14.858000  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:14.858058  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:14.882942  303437 cri.go:89] found id: ""
	I1210 07:07:14.882964  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.882972  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:14.882978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:14.883074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:14.906556  303437 cri.go:89] found id: ""
	I1210 07:07:14.906582  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.906591  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:14.906598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:14.906653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:14.944744  303437 cri.go:89] found id: ""
	I1210 07:07:14.944771  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.944780  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:14.944796  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:14.944859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:14.974225  303437 cri.go:89] found id: ""
	I1210 07:07:14.974248  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.974256  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:14.974263  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:14.974323  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:15.005431  303437 cri.go:89] found id: ""
	I1210 07:07:15.005515  303437 logs.go:282] 0 containers: []
	W1210 07:07:15.005539  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:15.005564  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:15.005607  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:15.075329  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:15.067284    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.067872    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069470    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069869    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.071556    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:15.075363  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:15.075376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:15.100635  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:15.100670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:15.129987  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:15.130013  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:15.198219  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:15.198300  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:17.487235  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:17.543553  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:17.543587  303437 retry.go:31] will retry after 31.69876155s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
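The storageclass apply fails for the same reason as the describe-nodes probes: nothing is listening on localhost:8443, so kubectl cannot download the OpenAPI schema it validates against, and minikube schedules a retry. A rough Go sketch of that apply-with-backoff pattern follows; the helper name and the fixed backoff schedule are assumptions, not minikube's actual retry.go, whose delays are randomized (hence the ~32s above).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyAddon runs "sudo KUBECONFIG=<kubeconfig> <kubectl> apply --force -f
// <manifest>", exactly as in the log, and sleeps through a backoff schedule
// between failed attempts.
func applyAddon(kubectl, kubeconfig, manifest string, backoffs []time.Duration) error {
	for attempt := 0; ; attempt++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		if attempt >= len(backoffs) {
			return fmt.Errorf("apply %s failed after %d attempts: %v\n%s", manifest, attempt+1, err, out)
		}
		fmt.Fprintf(os.Stderr, "apply failed, will retry after %s: %v\n", backoffs[attempt], err)
		time.Sleep(backoffs[attempt])
	}
}

func main() {
	err := applyAddon(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		[]time.Duration{30 * time.Second, 60 * time.Second},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Note the failure is at the validation phase: with --validate=false kubectl would skip the OpenAPI download, but the apply would still need a reachable apiserver.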
	I1210 07:07:17.712834  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:17.723193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:17.723262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:17.747430  303437 cri.go:89] found id: ""
	I1210 07:07:17.747453  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.747462  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:17.747468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:17.747525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:17.771960  303437 cri.go:89] found id: ""
	I1210 07:07:17.771982  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.771990  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:17.771996  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:17.772060  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:17.796155  303437 cri.go:89] found id: ""
	I1210 07:07:17.796176  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.796184  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:17.796190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:17.796251  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:17.825359  303437 cri.go:89] found id: ""
	I1210 07:07:17.825385  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.825394  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:17.825401  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:17.825462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:17.853147  303437 cri.go:89] found id: ""
	I1210 07:07:17.853170  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.853178  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:17.853184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:17.853243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:17.878806  303437 cri.go:89] found id: ""
	I1210 07:07:17.878830  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.878839  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:17.878846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:17.878905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:17.902975  303437 cri.go:89] found id: ""
	I1210 07:07:17.902999  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.903007  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:17.903037  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:17.903112  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:17.934568  303437 cri.go:89] found id: ""
	I1210 07:07:17.934592  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.934600  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:17.934610  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:17.934621  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:17.999695  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:17.999740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:18.029219  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:18.029256  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:18.094199  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:18.085818    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.086373    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.088284    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.089006    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.090767    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:18.094223  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:18.094238  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:18.120245  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:18.120283  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.649514  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:20.661165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:20.661236  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:20.686549  303437 cri.go:89] found id: ""
	I1210 07:07:20.686572  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.686581  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:20.686587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:20.686654  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:20.711873  303437 cri.go:89] found id: ""
	I1210 07:07:20.711895  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.711903  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:20.711910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:20.711968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:20.736261  303437 cri.go:89] found id: ""
	I1210 07:07:20.736283  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.736292  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:20.736298  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:20.736360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:20.765759  303437 cri.go:89] found id: ""
	I1210 07:07:20.765781  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.765797  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:20.765804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:20.765862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:20.793639  303437 cri.go:89] found id: ""
	I1210 07:07:20.793661  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.793669  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:20.793675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:20.793751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:20.818318  303437 cri.go:89] found id: ""
	I1210 07:07:20.818339  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.818347  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:20.818354  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:20.818417  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:20.843499  303437 cri.go:89] found id: ""
	I1210 07:07:20.843523  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.843533  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:20.843539  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:20.843598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:20.868745  303437 cri.go:89] found id: ""
	I1210 07:07:20.868768  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.868776  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:20.868785  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:20.868796  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.897905  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:20.897981  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:20.962576  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:20.962654  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:20.977746  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:20.977835  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:21.045052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:21.037239    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.037840    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.039622    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.040051    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.041574    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:21.045073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:21.045085  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
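The roughly 2.5-3 second gap between cycles comes from the recurring "sudo pgrep -xnf kube-apiserver.*minikube.*" probe: minikube keeps polling for an apiserver process whose command line matches the profile, and re-runs the diagnostic sweep while the probe keeps failing. A minimal sketch of such a wait loop, assuming a "minikube" profile name and made-up interval and timeout values; this shows the shape of the check, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess mirrors the recurring pgrep probe in the log.
// For pgrep, -f matches against the full command line, -x requires the
// whole line to match the pattern, and -n keeps only the newest match;
// pgrep exits non-zero when nothing matches, which exec.Command reports
// as an error.
func waitForAPIServerProcess(profile string, interval, timeout time.Duration) error {
	pattern := "kube-apiserver.*" + profile + ".*"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // a matching apiserver process showed up
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no kube-apiserver process for profile %q within %s", profile, timeout)
}

func main() {
	if err := waitForAPIServerProcess("minikube", 3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}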
	I1210 07:07:23.570777  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:23.580946  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:23.581021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:23.605355  303437 cri.go:89] found id: ""
	I1210 07:07:23.605379  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.605388  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:23.605394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:23.605451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:23.632675  303437 cri.go:89] found id: ""
	I1210 07:07:23.632697  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.632706  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:23.632713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:23.632783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:23.656579  303437 cri.go:89] found id: ""
	I1210 07:07:23.656602  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.656610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:23.656617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:23.656675  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:23.684796  303437 cri.go:89] found id: ""
	I1210 07:07:23.684816  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.684825  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:23.684832  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:23.684893  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:23.709043  303437 cri.go:89] found id: ""
	I1210 07:07:23.709064  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.709073  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:23.709079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:23.709149  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:23.733315  303437 cri.go:89] found id: ""
	I1210 07:07:23.733340  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.733348  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:23.733355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:23.733413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:23.761492  303437 cri.go:89] found id: ""
	I1210 07:07:23.761514  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.761524  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:23.761530  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:23.761586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:23.786489  303437 cri.go:89] found id: ""
	I1210 07:07:23.786511  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.786520  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:23.786530  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:23.786540  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:23.812193  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:23.812231  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:23.842956  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:23.842990  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:23.898018  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:23.898052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:23.912477  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:23.912507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:23.996757  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:23.988502    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.989251    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.990734    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.991360    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.993170    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:26.497835  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:26.508472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:26.508547  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:26.533241  303437 cri.go:89] found id: ""
	I1210 07:07:26.533264  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.533272  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:26.533279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:26.533337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:26.558844  303437 cri.go:89] found id: ""
	I1210 07:07:26.558868  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.558877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:26.558883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:26.558941  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:26.584008  303437 cri.go:89] found id: ""
	I1210 07:07:26.584042  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.584051  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:26.584058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:26.584176  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:26.609123  303437 cri.go:89] found id: ""
	I1210 07:07:26.609145  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.609153  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:26.609160  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:26.609220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:26.633105  303437 cri.go:89] found id: ""
	I1210 07:07:26.633127  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.633136  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:26.633142  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:26.633220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:26.662834  303437 cri.go:89] found id: ""
	I1210 07:07:26.662858  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.662875  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:26.662897  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:26.662989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:26.688296  303437 cri.go:89] found id: ""
	I1210 07:07:26.688318  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.688326  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:26.688332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:26.688401  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:26.714475  303437 cri.go:89] found id: ""
	I1210 07:07:26.714545  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.714564  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:26.714595  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:26.714609  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:26.769794  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:26.769827  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:26.782871  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:26.782909  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:26.843846  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:26.836227    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.836952    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838581    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838893    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.840461    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:26.843867  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:26.843881  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:26.869319  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:26.869353  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:27.109532  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:27.174544  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:27.174590  303437 retry.go:31] will retry after 31.997742819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1210 07:07:29.396194  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:29.406428  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:29.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:29.433424  303437 cri.go:89] found id: ""
	I1210 07:07:29.433455  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.433465  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:29.433471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:29.433536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:29.463589  303437 cri.go:89] found id: ""
	I1210 07:07:29.463615  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.463624  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:29.463630  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:29.463686  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:29.492343  303437 cri.go:89] found id: ""
	I1210 07:07:29.492365  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.492374  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:29.492380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:29.492437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:29.516069  303437 cri.go:89] found id: ""
	I1210 07:07:29.516097  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.516106  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:29.516113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:29.516171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:29.539661  303437 cri.go:89] found id: ""
	I1210 07:07:29.539693  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.539703  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:29.539712  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:29.539781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:29.563791  303437 cri.go:89] found id: ""
	I1210 07:07:29.563814  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.563823  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:29.563829  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:29.563887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:29.589136  303437 cri.go:89] found id: ""
	I1210 07:07:29.589160  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.589168  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:29.589175  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:29.589233  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:29.614701  303437 cri.go:89] found id: ""
	I1210 07:07:29.614724  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.614734  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:29.614743  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:29.614756  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:29.670207  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:29.670240  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:29.683977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:29.684005  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:29.748039  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:29.739856    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.740576    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742311    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742892    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.744661    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:29.748061  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:29.748077  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:29.772992  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:29.773024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:32.300508  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:32.310795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:32.310865  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:32.334361  303437 cri.go:89] found id: ""
	I1210 07:07:32.334387  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.334396  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:32.334403  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:32.334478  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:32.361534  303437 cri.go:89] found id: ""
	I1210 07:07:32.361627  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.361651  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:32.361681  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:32.361764  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:32.386488  303437 cri.go:89] found id: ""
	I1210 07:07:32.386513  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.386521  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:32.386528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:32.386588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:32.415239  303437 cri.go:89] found id: ""
	I1210 07:07:32.415265  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.415274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:32.415280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:32.415340  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:32.443074  303437 cri.go:89] found id: ""
	I1210 07:07:32.443097  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.443105  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:32.443111  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:32.443170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:32.477593  303437 cri.go:89] found id: ""
	I1210 07:07:32.477620  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.477629  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:32.477636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:32.477693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:32.502550  303437 cri.go:89] found id: ""
	I1210 07:07:32.502575  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.502584  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:32.502590  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:32.502666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:32.527562  303437 cri.go:89] found id: ""
	I1210 07:07:32.527585  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.527606  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:32.527616  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:32.527632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:32.588732  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:32.581238    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.581723    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583416    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583864    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.585321    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:32.588755  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:32.588767  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:32.614322  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:32.614354  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:32.642747  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:32.642777  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:32.697541  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:32.697576  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
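Every sweep ends with the same four log collections, varying only in order: kubelet and containerd from journald, a filtered dmesg tail, and a container-status listing with a docker fallback. A compact Go sketch of that gathering step, with local execution standing in for the test's SSH runner (an assumption; in the real run these pipelines execute inside the node):

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs each source's shell pipeline via "bash -c", as the log
// does, and keeps whatever came back even on failure, since partial logs
// are still useful for diagnosis.
func gatherLogs() map[string]string {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	logs := make(map[string]string)
	for name, pipeline := range sources {
		out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
		if err != nil {
			logs[name] = fmt.Sprintf("(failed: %v)\n%s", err, out)
			continue
		}
		logs[name] = string(out)
	}
	return logs
}

func main() {
	for name, out := range gatherLogs() {
		fmt.Printf("== %s: %d bytes\n", name, len(out))
	}
}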
	I1210 07:07:35.211281  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:35.221258  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:35.221336  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:35.253168  303437 cri.go:89] found id: ""
	I1210 07:07:35.253193  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.253203  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:35.253210  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:35.253268  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:35.281234  303437 cri.go:89] found id: ""
	I1210 07:07:35.281257  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.281267  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:35.281273  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:35.281333  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:35.310530  303437 cri.go:89] found id: ""
	I1210 07:07:35.310554  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.310563  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:35.310570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:35.310627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:35.334764  303437 cri.go:89] found id: ""
	I1210 07:07:35.334792  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.334801  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:35.334813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:35.334870  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:35.361502  303437 cri.go:89] found id: ""
	I1210 07:07:35.361525  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.361534  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:35.361540  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:35.361607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:35.389058  303437 cri.go:89] found id: ""
	I1210 07:07:35.389080  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.389089  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:35.389095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:35.389154  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:35.425176  303437 cri.go:89] found id: ""
	I1210 07:07:35.425215  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.425226  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:35.425232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:35.425299  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:35.453052  303437 cri.go:89] found id: ""
	I1210 07:07:35.453079  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.453088  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:35.453097  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:35.453108  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:35.522148  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:35.513319    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.513889    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.516065    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517374    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517853    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:35.513319    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.513889    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.516065    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517374    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517853    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:35.522174  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:35.522186  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:35.547665  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:35.547698  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:35.575564  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:35.575596  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:35.634362  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:35.634400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
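The block from the pgrep check down to the kubernetes-dashboard lookup repeats on a roughly three-second cadence: it is the health-check loop polling for control-plane containers, and an empty ID list for all eight names means bootstrap never produced a running apiserver in the first place. The enumeration reduces to a sketch like this (container names copied from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"   # empty output = no container with that name
    done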
	I1210 07:07:38.149569  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:38.160486  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:38.160568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:38.201222  303437 cri.go:89] found id: ""
	I1210 07:07:38.201245  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.201253  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:38.201260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:38.201317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:38.237151  303437 cri.go:89] found id: ""
	I1210 07:07:38.237174  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.237183  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:38.237189  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:38.237259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:38.262732  303437 cri.go:89] found id: ""
	I1210 07:07:38.262760  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.262770  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:38.262777  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:38.262835  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:38.293247  303437 cri.go:89] found id: ""
	I1210 07:07:38.293273  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.293283  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:38.293290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:38.293351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:38.317818  303437 cri.go:89] found id: ""
	I1210 07:07:38.317840  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.317849  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:38.317855  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:38.317911  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:38.342419  303437 cri.go:89] found id: ""
	I1210 07:07:38.342447  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.342465  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:38.342473  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:38.342545  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:38.367206  303437 cri.go:89] found id: ""
	I1210 07:07:38.367271  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.367295  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:38.367316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:38.367408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:38.395595  303437 cri.go:89] found id: ""
	I1210 07:07:38.395617  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.395626  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:38.395635  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:38.395646  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:38.455465  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:38.455496  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:38.469974  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:38.470052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:38.534901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:38.526759    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.527529    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529111    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529677    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.531428    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:38.526759    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.527529    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529111    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529677    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.531428    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:38.534975  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:38.535033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:38.560101  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:38.560133  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:41.091155  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:41.101359  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:41.101439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:41.124928  303437 cri.go:89] found id: ""
	I1210 07:07:41.124950  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.124958  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:41.124964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:41.125021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:41.150502  303437 cri.go:89] found id: ""
	I1210 07:07:41.150525  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.150534  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:41.150541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:41.150597  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:41.175254  303437 cri.go:89] found id: ""
	I1210 07:07:41.175280  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.175289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:41.175295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:41.175355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:41.213279  303437 cri.go:89] found id: ""
	I1210 07:07:41.213302  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.213311  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:41.213317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:41.213376  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:41.241895  303437 cri.go:89] found id: ""
	I1210 07:07:41.241922  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.241931  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:41.241938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:41.241997  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:41.266233  303437 cri.go:89] found id: ""
	I1210 07:07:41.266259  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.266274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:41.266280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:41.266375  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:41.295481  303437 cri.go:89] found id: ""
	I1210 07:07:41.295503  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.295512  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:41.295519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:41.295586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:41.325350  303437 cri.go:89] found id: ""
	I1210 07:07:41.325372  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.325381  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:41.325390  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:41.325402  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:41.381086  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:41.381121  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:41.394364  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:41.394411  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:41.475813  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:41.467819    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.468574    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.470350    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.471004    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.472517    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:41.467819    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.468574    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.470350    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.471004    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.472517    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:41.475836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:41.475849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:41.500717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:41.500751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:44.031462  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:44.042099  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:44.042173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:44.066643  303437 cri.go:89] found id: ""
	I1210 07:07:44.066674  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.066683  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:44.066689  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:44.066752  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:44.091511  303437 cri.go:89] found id: ""
	I1210 07:07:44.091533  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.091542  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:44.091548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:44.091627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:44.116433  303437 cri.go:89] found id: ""
	I1210 07:07:44.116455  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.116464  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:44.116470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:44.116527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:44.141546  303437 cri.go:89] found id: ""
	I1210 07:07:44.141568  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.141576  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:44.141583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:44.141659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:44.183580  303437 cri.go:89] found id: ""
	I1210 07:07:44.183602  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.183610  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:44.183616  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:44.183673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:44.214628  303437 cri.go:89] found id: ""
	I1210 07:07:44.214651  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.214659  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:44.214666  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:44.214738  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:44.241699  303437 cri.go:89] found id: ""
	I1210 07:07:44.241721  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.241729  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:44.241736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:44.241805  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:44.266706  303437 cri.go:89] found id: ""
	I1210 07:07:44.266729  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.266737  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:44.266746  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:44.266758  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:44.321835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:44.321867  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:44.335089  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:44.335120  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:44.395294  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:44.387779    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.388344    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389371    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389875    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.391491    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:44.387779    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.388344    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389371    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389875    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.391491    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:44.395360  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:44.395388  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:44.425916  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:44.425956  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:46.965660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:46.976149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:46.976221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:47.003597  303437 cri.go:89] found id: ""
	I1210 07:07:47.003620  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.003629  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:47.003636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:47.003709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:47.028196  303437 cri.go:89] found id: ""
	I1210 07:07:47.028218  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.028226  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:47.028232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:47.028290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:47.056800  303437 cri.go:89] found id: ""
	I1210 07:07:47.056824  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.056833  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:47.056840  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:47.056916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:47.081593  303437 cri.go:89] found id: ""
	I1210 07:07:47.081656  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.081678  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:47.081697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:47.081767  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:47.110385  303437 cri.go:89] found id: ""
	I1210 07:07:47.110451  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.110474  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:47.110492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:47.110563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:47.136398  303437 cri.go:89] found id: ""
	I1210 07:07:47.136465  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.136490  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:47.136503  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:47.136576  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:47.162521  303437 cri.go:89] found id: ""
	I1210 07:07:47.162545  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.162554  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:47.162560  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:47.162617  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:47.200031  303437 cri.go:89] found id: ""
	I1210 07:07:47.200052  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.200060  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:47.200069  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:47.200080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:47.240172  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:47.240197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:47.295589  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:47.295625  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:47.308817  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:47.308843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:47.373455  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:47.365473    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.366342    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.367939    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.368531    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.370138    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:47.365473    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.366342    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.367939    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.368531    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.370138    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:47.373479  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:47.373504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:47.918542  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:48.000256  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:48.000468  303437 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
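The dashboard addon apply fails before anything reaches the cluster: kubectl's client-side validation has to download the OpenAPI schema from the apiserver, and that request hits the same refused connection on port 8443. The --validate=false workaround suggested in the error would only skip the schema check; the apply itself still needs a reachable apiserver. A sketch of what the retry keeps re-running, trimmed to one manifest (command and paths taken from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
      -f /etc/kubernetes/addons/dashboard-ns.yaml
    # --validate=false silences the schema error but does not make the
    # apiserver reachable, so it is not a fix here.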
	I1210 07:07:49.243254  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:49.300794  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:49.300885  303437 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
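The default-storageclass addon fails for the identical reason, and minikube keeps retrying both applies on its own. When retrying by hand, a sketch of gating the retry on apiserver liveness (the /healthz probe is an assumption, not something this log performs):

    # Assumption: block until the apiserver answers before re-applying addons.
    until sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/healthz >/dev/null 2>&1; do
      sleep 3
    done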
	I1210 07:07:49.898427  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:49.908683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:49.908754  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:49.934109  303437 cri.go:89] found id: ""
	I1210 07:07:49.934136  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.934145  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:49.934152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:49.934214  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:49.959202  303437 cri.go:89] found id: ""
	I1210 07:07:49.959226  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.959235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:49.959252  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:49.959329  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:49.983331  303437 cri.go:89] found id: ""
	I1210 07:07:49.983356  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.983364  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:49.983371  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:49.983427  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:50.012230  303437 cri.go:89] found id: ""
	I1210 07:07:50.012265  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.012274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:50.012281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:50.012350  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:50.039851  303437 cri.go:89] found id: ""
	I1210 07:07:50.039880  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.039889  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:50.039895  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:50.039962  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:50.071162  303437 cri.go:89] found id: ""
	I1210 07:07:50.071186  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.071195  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:50.071201  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:50.071265  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:50.097095  303437 cri.go:89] found id: ""
	I1210 07:07:50.097118  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.097127  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:50.097134  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:50.097198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:50.121941  303437 cri.go:89] found id: ""
	I1210 07:07:50.121966  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.121976  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:50.121985  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:50.121998  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:50.178251  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:50.178286  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:50.195455  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:50.195491  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:50.283052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:50.274829    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.275404    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277130    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277736    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.279468    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:50.274829    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.275404    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277130    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277736    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.279468    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:50.283077  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:50.283098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:50.309433  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:50.309472  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:52.837493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:52.848301  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:52.848370  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:52.872661  303437 cri.go:89] found id: ""
	I1210 07:07:52.872682  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.872690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:52.872696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:52.872755  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:52.895064  303437 cri.go:89] found id: ""
	I1210 07:07:52.895090  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.895100  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:52.895112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:52.895170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:52.918926  303437 cri.go:89] found id: ""
	I1210 07:07:52.918950  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.918958  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:52.918964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:52.919038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:52.942801  303437 cri.go:89] found id: ""
	I1210 07:07:52.942823  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.942831  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:52.942838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:52.942895  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:52.968885  303437 cri.go:89] found id: ""
	I1210 07:07:52.968910  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.968919  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:52.968925  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:52.968984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:52.992050  303437 cri.go:89] found id: ""
	I1210 07:07:52.992072  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.992080  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:52.992087  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:52.992145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:53.020481  303437 cri.go:89] found id: ""
	I1210 07:07:53.020507  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.020516  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:53.020523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:53.020586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:53.045391  303437 cri.go:89] found id: ""
	I1210 07:07:53.045412  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.045421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:53.045430  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:53.045441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:53.100408  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:53.100444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:53.115165  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:53.115192  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:53.192011  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:53.181167    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.181972    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.182929    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.185331    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.186084    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:53.181167    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.181972    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.182929    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.185331    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.186084    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:53.192034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:53.192049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:53.220495  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:53.220572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:55.749081  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:55.759242  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:55.759314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:55.782656  303437 cri.go:89] found id: ""
	I1210 07:07:55.782681  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.782690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:55.782707  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:55.782766  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:55.807483  303437 cri.go:89] found id: ""
	I1210 07:07:55.807509  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.807527  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:55.807534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:55.807595  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:55.832851  303437 cri.go:89] found id: ""
	I1210 07:07:55.832887  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.832896  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:55.832906  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:55.832966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:55.857553  303437 cri.go:89] found id: ""
	I1210 07:07:55.857575  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.857584  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:55.857591  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:55.857653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:55.885207  303437 cri.go:89] found id: ""
	I1210 07:07:55.885230  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.885240  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:55.885246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:55.885315  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:55.909296  303437 cri.go:89] found id: ""
	I1210 07:07:55.909322  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.909332  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:55.909340  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:55.909398  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:55.933701  303437 cri.go:89] found id: ""
	I1210 07:07:55.933723  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.933733  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:55.933740  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:55.933812  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:55.958095  303437 cri.go:89] found id: ""
	I1210 07:07:55.958121  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.958130  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:55.958139  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:55.958150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:56.028949  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:56.021322    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.021920    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023068    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023441    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.025194    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:56.021322    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.021920    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023068    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023441    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.025194    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:56.028976  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:56.029046  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:56.055269  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:56.055308  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:56.087408  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:56.087438  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:56.143537  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:56.143570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
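After each failed poll, minikube gathers the same five log sources before retrying. To collect them manually on the node, a sketch assembled from the exact commands logged above (flags, line counts, and paths as logged, not assumptions):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig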
	I1210 07:07:58.657737  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:58.669685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:58.669751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:58.704925  303437 cri.go:89] found id: ""
	I1210 07:07:58.704947  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.704955  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:58.704962  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:58.705021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:58.732775  303437 cri.go:89] found id: ""
	I1210 07:07:58.732798  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.732806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:58.732812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:58.732871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:58.757863  303437 cri.go:89] found id: ""
	I1210 07:07:58.757885  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.757893  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:58.757899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:58.757957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:58.782893  303437 cri.go:89] found id: ""
	I1210 07:07:58.782914  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.782923  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:58.782929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:58.782987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:58.813425  303437 cri.go:89] found id: ""
	I1210 07:07:58.813458  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.813467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:58.813474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:58.813531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:58.837894  303437 cri.go:89] found id: ""
	I1210 07:07:58.837920  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.837930  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:58.837937  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:58.837994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:58.862767  303437 cri.go:89] found id: ""
	I1210 07:07:58.862793  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.862803  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:58.862810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:58.862871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:58.887161  303437 cri.go:89] found id: ""
	I1210 07:07:58.887190  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.887203  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:58.887213  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:58.887226  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:58.912742  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:58.912774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:58.941751  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:58.941778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:58.997499  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:58.997538  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:59.012690  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:59.012716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:59.079032  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:59.173255  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:59.241772  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:59.241906  303437 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:07:59.245162  303437 out.go:179] * Enabled addons: 
	I1210 07:07:59.248019  303437 addons.go:530] duration metric: took 1m50.382393488s for enable addons: enabled=[]
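The storage-provisioner apply fails for the same underlying reason as every describe-nodes attempt: nothing is listening on localhost:8443. The `--validate=false` flag suggested in the error would only skip OpenAPI download; the apply itself would still need a reachable apiserver. A quick way to confirm the apiserver is the blocker, using only paths already present in this log (the /readyz endpoint is a standard kube-apiserver health endpoint, assumed here rather than taken from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /readyz
    # expected while the apiserver is down:
    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?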
	I1210 07:08:01.579277  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:01.590395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:01.590469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:01.616988  303437 cri.go:89] found id: ""
	I1210 07:08:01.617017  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.617025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:01.617032  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:01.617095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:01.643533  303437 cri.go:89] found id: ""
	I1210 07:08:01.643555  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.643563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:01.643570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:01.643633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:01.683402  303437 cri.go:89] found id: ""
	I1210 07:08:01.683430  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.683439  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:01.683446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:01.683507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:01.714420  303437 cri.go:89] found id: ""
	I1210 07:08:01.714448  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.714457  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:01.714463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:01.714522  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:01.741588  303437 cri.go:89] found id: ""
	I1210 07:08:01.741614  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.741625  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:01.741632  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:01.741697  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:01.766133  303437 cri.go:89] found id: ""
	I1210 07:08:01.766163  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.766172  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:01.766178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:01.766246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:01.796151  303437 cri.go:89] found id: ""
	I1210 07:08:01.796173  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.796181  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:01.796188  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:01.796253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:01.821826  303437 cri.go:89] found id: ""
	I1210 07:08:01.821848  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.821857  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:01.821872  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:01.821883  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:01.856135  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:01.856162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:01.912548  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:01.912582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:01.926252  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:01.926279  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:01.989471  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:01.989491  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:01.989504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:04.519169  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:04.529774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:04.529853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:04.557926  303437 cri.go:89] found id: ""
	I1210 07:08:04.557950  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.557967  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:04.557988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:04.558067  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:04.585171  303437 cri.go:89] found id: ""
	I1210 07:08:04.585195  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.585204  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:04.585223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:04.585292  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:04.613695  303437 cri.go:89] found id: ""
	I1210 07:08:04.613720  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.613729  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:04.613735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:04.613808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:04.637775  303437 cri.go:89] found id: ""
	I1210 07:08:04.637859  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.637880  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:04.637899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:04.637989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:04.673966  303437 cri.go:89] found id: ""
	I1210 07:08:04.674033  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.674057  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:04.674073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:04.674161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:04.706760  303437 cri.go:89] found id: ""
	I1210 07:08:04.706825  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.706846  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:04.706865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:04.706955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:04.748640  303437 cri.go:89] found id: ""
	I1210 07:08:04.748707  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.748731  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:04.748749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:04.748837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:04.778179  303437 cri.go:89] found id: ""
	I1210 07:08:04.778241  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.778263  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:04.778283  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:04.778324  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:04.838994  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:04.839038  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:04.852663  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:04.852737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:04.919247  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:04.910928    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.911685    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913313    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913950    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.915537    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:04.910928    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.911685    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913313    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913950    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.915537    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:04.919311  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:04.919346  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:04.944409  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:04.944441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
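Note that the failure mode here is not a crashing apiserver but one that was never created: `crictl ps -a` (state: all, so exited containers would also show) returns no kube-apiserver container at any point in this log. When triaging, the kubelet journal gathered above is the place to look for why the static pods were never started; a narrower query could look like the following sketch (the grep pattern is illustrative, not from the log):

    sudo journalctl -u kubelet -n 400 | grep -iE 'kube-apiserver|static pod|failed'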
	I1210 07:08:07.475233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:07.485817  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:07.485889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:07.510450  303437 cri.go:89] found id: ""
	I1210 07:08:07.510473  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.510482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:07.510488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:07.510549  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:07.536516  303437 cri.go:89] found id: ""
	I1210 07:08:07.536541  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.536550  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:07.536556  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:07.536646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:07.561868  303437 cri.go:89] found id: ""
	I1210 07:08:07.561893  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.561902  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:07.561908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:07.561987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:07.590197  303437 cri.go:89] found id: ""
	I1210 07:08:07.590221  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.590230  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:07.590236  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:07.590342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:07.613514  303437 cri.go:89] found id: ""
	I1210 07:08:07.613539  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.613548  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:07.613555  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:07.613662  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:07.638377  303437 cri.go:89] found id: ""
	I1210 07:08:07.638402  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.638410  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:07.638417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:07.638477  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:07.667985  303437 cri.go:89] found id: ""
	I1210 07:08:07.668058  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.668082  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:07.668102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:07.668189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:07.698530  303437 cri.go:89] found id: ""
	I1210 07:08:07.698605  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.698647  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:07.698671  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:07.698710  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:07.761708  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:07.761745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:07.775951  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:07.775978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:07.842158  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:07.833714    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.834583    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836355    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836801    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.838463    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:07.833714    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.834583    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836355    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836801    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.838463    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:07.842183  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:07.842200  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:07.868656  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:07.868693  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:10.398249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:10.410905  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:10.410974  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:10.441450  303437 cri.go:89] found id: ""
	I1210 07:08:10.441474  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.441482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:10.441489  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:10.441551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:10.467324  303437 cri.go:89] found id: ""
	I1210 07:08:10.467345  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.467354  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:10.467360  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:10.467422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:10.490980  303437 cri.go:89] found id: ""
	I1210 07:08:10.491001  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.491117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:10.491125  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:10.491186  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:10.515608  303437 cri.go:89] found id: ""
	I1210 07:08:10.515673  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.515688  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:10.515696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:10.515753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:10.540198  303437 cri.go:89] found id: ""
	I1210 07:08:10.540223  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.540232  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:10.540246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:10.540304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:10.565060  303437 cri.go:89] found id: ""
	I1210 07:08:10.565125  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.565140  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:10.565155  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:10.565219  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:10.593396  303437 cri.go:89] found id: ""
	I1210 07:08:10.593430  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.593438  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:10.593445  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:10.593510  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:10.617363  303437 cri.go:89] found id: ""
	I1210 07:08:10.617395  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.617405  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:10.617414  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:10.617426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:10.677240  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:10.677317  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:10.692150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:10.692220  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:10.758835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:10.750046    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.750802    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.753608    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.754055    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.755555    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:10.750046    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.750802    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.753608    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.754055    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.755555    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:10.758906  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:10.758934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:10.783900  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:10.783935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:13.316158  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:13.326768  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:13.326841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:13.354375  303437 cri.go:89] found id: ""
	I1210 07:08:13.354402  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.354411  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:13.354417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:13.354486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:13.379439  303437 cri.go:89] found id: ""
	I1210 07:08:13.379467  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.379479  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:13.379491  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:13.379572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:13.406403  303437 cri.go:89] found id: ""
	I1210 07:08:13.406425  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.406433  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:13.406439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:13.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:13.441528  303437 cri.go:89] found id: ""
	I1210 07:08:13.441633  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.441665  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:13.441698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:13.441887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:13.485367  303437 cri.go:89] found id: ""
	I1210 07:08:13.485407  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.485416  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:13.485423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:13.485491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:13.515544  303437 cri.go:89] found id: ""
	I1210 07:08:13.515572  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.515582  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:13.515588  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:13.515646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:13.541572  303437 cri.go:89] found id: ""
	I1210 07:08:13.541604  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.541613  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:13.541620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:13.541692  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:13.566335  303437 cri.go:89] found id: ""
	I1210 07:08:13.566366  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.566376  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:13.566385  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:13.566396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:13.622359  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:13.622391  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:13.635632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:13.635661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:13.716667  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:13.707886    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.708774    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.710652    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.711407    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.713108    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:13.707886    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.708774    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.710652    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.711407    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.713108    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:13.716691  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:13.716711  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:13.743967  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:13.744002  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
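
The block above is one pass of minikube's apiserver wait loop: probe for a kube-apiserver process, fall back to asking the CRI runtime for a matching container, and when neither turns anything up, gather diagnostics and retry a few seconds later. A minimal Go sketch of that probe, using the exact command strings from the log (the loop scaffolding and the 2-minute deadline are illustrative assumptions, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverUp mirrors the two probes in the log: a pgrep for the
    // process, then `crictl ps` filtered by container name.
    func apiserverUp() bool {
    	if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    		return true
    	}
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    	return err == nil && len(out) > 0
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
    	for time.Now().Before(deadline) {
    		if apiserverUp() {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		time.Sleep(3 * time.Second) // the log shows roughly 3s between passes
    	}
    	fmt.Println("gave up waiting for kube-apiserver")
    }
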
	I1210 07:08:16.273094  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:16.283420  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:16.283488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:16.307336  303437 cri.go:89] found id: ""
	I1210 07:08:16.307358  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.307366  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:16.307373  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:16.307430  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:16.330448  303437 cri.go:89] found id: ""
	I1210 07:08:16.330476  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.330485  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:16.330492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:16.330552  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:16.362050  303437 cri.go:89] found id: ""
	I1210 07:08:16.362080  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.362089  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:16.362096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:16.362172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:16.385708  303437 cri.go:89] found id: ""
	I1210 07:08:16.385732  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.385741  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:16.385747  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:16.385852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:16.421398  303437 cri.go:89] found id: ""
	I1210 07:08:16.421427  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.421436  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:16.421442  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:16.421509  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:16.449046  303437 cri.go:89] found id: ""
	I1210 07:08:16.449074  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.449082  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:16.449089  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:16.449166  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:16.475499  303437 cri.go:89] found id: ""
	I1210 07:08:16.475525  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.475534  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:16.475541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:16.475619  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:16.502476  303437 cri.go:89] found id: ""
	I1210 07:08:16.502506  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.502515  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:16.502524  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:16.502535  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:16.530854  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:16.530929  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:16.586993  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:16.587030  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:16.600337  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:16.600364  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:16.669775  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:16.660570    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.661341    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.663229    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.664010    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.665709    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:16.660570    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.661341    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.663229    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.664010    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.665709    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:16.669849  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:16.669875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
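
Each `found id: ""` line above comes from the output of `crictl ps -a --quiet --name=<pattern>`, which prints one container ID per line and prints nothing when no container matches; splitting an empty string still yields one empty entry, which the following line then filters down to "0 containers". A small sketch of that parsing step (function names are illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseIDs splits `crictl ps --quiet` output into container IDs.
    // Empty output still produces a single "" entry, matching the log.
    func parseIDs(raw string) []string {
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(raw), "\n") {
    		ids = append(ids, strings.TrimSpace(line))
    	}
    	return ids
    }

    // nonEmpty drops the "" placeholder, giving the "0 containers: []" count.
    func nonEmpty(ids []string) []string {
    	var out []string
    	for _, id := range ids {
    		if id != "" {
    			out = append(out, id)
    		}
    	}
    	return out
    }

    func main() {
    	fmt.Printf("%q -> %d containers\n", parseIDs(""), len(nonEmpty(parseIDs(""))))
    	fmt.Printf("%q -> %d containers\n", parseIDs("abc\ndef"), len(nonEmpty(parseIDs("abc\ndef"))))
    }
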
	I1210 07:08:19.199141  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:19.209670  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:19.209739  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:19.242748  303437 cri.go:89] found id: ""
	I1210 07:08:19.242775  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.242784  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:19.242791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:19.242849  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:19.266957  303437 cri.go:89] found id: ""
	I1210 07:08:19.266980  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.266989  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:19.266995  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:19.267066  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:19.293252  303437 cri.go:89] found id: ""
	I1210 07:08:19.293276  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.293285  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:19.293292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:19.293349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:19.318070  303437 cri.go:89] found id: ""
	I1210 07:08:19.318096  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.318105  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:19.318112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:19.318171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:19.341744  303437 cri.go:89] found id: ""
	I1210 07:08:19.341769  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.341783  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:19.341789  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:19.341847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:19.366605  303437 cri.go:89] found id: ""
	I1210 07:08:19.366632  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.366641  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:19.366648  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:19.366706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:19.393536  303437 cri.go:89] found id: ""
	I1210 07:08:19.393561  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.393570  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:19.393576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:19.393633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:19.422513  303437 cri.go:89] found id: ""
	I1210 07:08:19.422535  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.422546  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:19.422556  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:19.422566  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:19.453046  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:19.453118  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:19.488889  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:19.488918  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:19.547224  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:19.547259  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:19.562006  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:19.562035  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:19.625530  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:19.617363    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.618299    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620102    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620530    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.622148    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:19.617363    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.618299    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620102    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620530    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.622148    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
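
Between probes, the loop collects the same five diagnostics each pass: kubelet and containerd logs via journalctl, recent kernel warnings via dmesg, `kubectl describe nodes`, and a container listing. Each source is a single shell command whose output is folded into the eventual failure report. A sketch of that gathering step, with the command strings copied verbatim from the log (the harness around them is an illustration only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    type source struct{ name, cmd string }

    func main() {
    	sources := []source{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		// CombinedOutput keeps stderr, so failures still leave a trace
    		// in the report, as with the describe-nodes errors above.
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Printf("==> %s (err: %v)\n%s\n", s.name, err, out)
    	}
    }
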
	I1210 07:08:22.125860  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:22.136477  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:22.136550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:22.164763  303437 cri.go:89] found id: ""
	I1210 07:08:22.164786  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.164795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:22.164801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:22.164861  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:22.190879  303437 cri.go:89] found id: ""
	I1210 07:08:22.190900  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.190909  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:22.190915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:22.190973  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:22.215247  303437 cri.go:89] found id: ""
	I1210 07:08:22.215278  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.215286  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:22.215292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:22.215351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:22.239059  303437 cri.go:89] found id: ""
	I1210 07:08:22.239086  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.239095  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:22.239102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:22.239163  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:22.264259  303437 cri.go:89] found id: ""
	I1210 07:08:22.264284  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.264293  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:22.264299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:22.264357  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:22.289890  303437 cri.go:89] found id: ""
	I1210 07:08:22.289913  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.289923  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:22.289929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:22.289987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:22.317025  303437 cri.go:89] found id: ""
	I1210 07:08:22.317051  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.317060  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:22.317067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:22.317124  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:22.341933  303437 cri.go:89] found id: ""
	I1210 07:08:22.341965  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.341974  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:22.341992  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:22.342004  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:22.398310  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:22.398344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:22.413479  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:22.413520  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:22.490851  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:22.482047    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.482772    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.484530    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.485118    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.486931    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:22.482047    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.482772    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.484530    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.485118    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.486931    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:22.490873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:22.490888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:22.518860  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:22.518891  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:25.049142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:25.060069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:25.060142  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:25.089203  303437 cri.go:89] found id: ""
	I1210 07:08:25.089232  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.089242  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:25.089248  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:25.089317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:25.118751  303437 cri.go:89] found id: ""
	I1210 07:08:25.118776  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.118785  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:25.118791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:25.118848  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:25.143129  303437 cri.go:89] found id: ""
	I1210 07:08:25.143163  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.143173  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:25.143179  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:25.143240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:25.169805  303437 cri.go:89] found id: ""
	I1210 07:08:25.169830  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.169839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:25.169846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:25.169905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:25.194716  303437 cri.go:89] found id: ""
	I1210 07:08:25.194743  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.194752  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:25.194759  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:25.194818  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:25.221104  303437 cri.go:89] found id: ""
	I1210 07:08:25.221127  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.221135  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:25.221141  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:25.221199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:25.249738  303437 cri.go:89] found id: ""
	I1210 07:08:25.249762  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.249771  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:25.249784  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:25.249842  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:25.273527  303437 cri.go:89] found id: ""
	I1210 07:08:25.273552  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.273562  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:25.273572  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:25.273583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:25.298962  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:25.298996  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:25.326742  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:25.326770  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:25.381274  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:25.381307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:25.394260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:25.394289  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:25.485635  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:25.478200    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.479077    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.480640    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.481256    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.482290    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:25.478200    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.479077    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.480640    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.481256    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.482290    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
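
Every describe-nodes attempt above fails the same way: kubectl reads the server address https://localhost:8443 from /var/lib/minikube/kubeconfig and gets `connection refused` on [::1]:8443, which means nothing is listening on that port at all; a firewall drop or a hung apiserver would surface as a timeout instead. That distinction is easy to check directly; a minimal probe, illustrative only:

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	switch {
    	case err == nil:
    		conn.Close()
    		fmt.Println("something is listening on 8443")
    	case errors.Is(err, syscall.ECONNREFUSED):
    		// The case in this log: the port is closed because the
    		// kube-apiserver container never started.
    		fmt.Println("refused: nothing listening, apiserver not running")
    	default:
    		fmt.Println("other failure (timeout, DNS, ...):", err)
    	}
    }
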
	I1210 07:08:27.987151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:28.000081  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:28.000164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:28.025871  303437 cri.go:89] found id: ""
	I1210 07:08:28.025896  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.025904  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:28.025917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:28.025978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:28.050799  303437 cri.go:89] found id: ""
	I1210 07:08:28.050822  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.050831  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:28.050837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:28.050902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:28.075890  303437 cri.go:89] found id: ""
	I1210 07:08:28.075912  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.075921  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:28.075928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:28.075988  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:28.100461  303437 cri.go:89] found id: ""
	I1210 07:08:28.100483  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.100492  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:28.100499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:28.100555  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:28.126583  303437 cri.go:89] found id: ""
	I1210 07:08:28.126607  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.126617  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:28.126623  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:28.126682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:28.156736  303437 cri.go:89] found id: ""
	I1210 07:08:28.156758  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.156767  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:28.156774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:28.156837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:28.181562  303437 cri.go:89] found id: ""
	I1210 07:08:28.181635  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.181657  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:28.181675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:28.181760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:28.206007  303437 cri.go:89] found id: ""
	I1210 07:08:28.206081  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.206106  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:28.206127  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:28.206163  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:28.219409  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:28.219445  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:28.285367  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:28.277994    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.278613    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280604    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.282182    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:28.277994    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.278613    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280604    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.282182    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:28.285387  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:28.285399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:28.310115  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:28.310150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:28.337400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:28.337427  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
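
The "container status" command above is a shell fallback chain: substitute the full path to crictl when `which` finds one (otherwise keep the bare name), and if the crictl listing fails entirely, fall back to `docker ps -a`. The same try-in-order pattern as a generic Go helper (hypothetical, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runFirst executes candidate commands in order and returns the
    // output of the first one that succeeds.
    func runFirst(cmds ...string) (string, error) {
    	var lastErr error
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		if err == nil {
    			return string(out), nil
    		}
    		lastErr = fmt.Errorf("%s: %w", c, err)
    	}
    	return "", lastErr
    }

    func main() {
    	out, err := runFirst("sudo crictl ps -a", "sudo docker ps -a")
    	if err != nil {
    		fmt.Println("all runtimes failed:", err)
    		return
    	}
    	fmt.Print(out)
    }
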
	I1210 07:08:30.895800  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:30.906215  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:30.906285  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:30.940989  303437 cri.go:89] found id: ""
	I1210 07:08:30.941016  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.941025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:30.941031  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:30.941089  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:30.968174  303437 cri.go:89] found id: ""
	I1210 07:08:30.968196  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.968205  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:30.968211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:30.968267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:30.997147  303437 cri.go:89] found id: ""
	I1210 07:08:30.997181  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.997191  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:30.997198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:30.997324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:31.027985  303437 cri.go:89] found id: ""
	I1210 07:08:31.028024  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.028033  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:31.028039  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:31.028101  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:31.052662  303437 cri.go:89] found id: ""
	I1210 07:08:31.052684  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.052693  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:31.052699  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:31.052760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:31.078026  303437 cri.go:89] found id: ""
	I1210 07:08:31.078051  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.078060  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:31.078067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:31.078129  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:31.106108  303437 cri.go:89] found id: ""
	I1210 07:08:31.106135  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.106144  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:31.106150  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:31.106212  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:31.133109  303437 cri.go:89] found id: ""
	I1210 07:08:31.133133  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.133141  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:31.133150  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:31.133162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:31.158330  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:31.158369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:31.190546  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:31.190570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:31.245193  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:31.245228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:31.258848  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:31.258882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:31.332332  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:31.324865    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.325506    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327103    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327745    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.329226    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:31.324865    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.325506    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327103    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327745    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.329226    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
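
Each failure block above quotes the kubectl stderr twice: once embedded in the error string and again between the ** stderr ** markers. One way a log gains that shape is an error wrapper that captures stderr into its Error() text while the caller also prints the captured stream separately; a compact illustration (hypothetical types, not minikube's actual code):

    package main

    import "fmt"

    // execError carries the failed command and its captured stderr.
    type execError struct {
    	cmd    string
    	stderr string
    }

    func (e *execError) Error() string {
    	return fmt.Sprintf("command: %s: Process exited with status 1\nstderr:\n%s", e.cmd, e.stderr)
    }

    func main() {
    	err := &execError{
    		cmd:    "kubectl describe nodes",
    		stderr: "The connection to the server localhost:8443 was refused\n",
    	}
    	// Logging the error and then quoting stderr separately duplicates it:
    	fmt.Printf("failed describe nodes: %v output:\n** stderr **\n%s** /stderr **\n", err, err.stderr)
    }
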
	I1210 07:08:33.832563  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:33.843389  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:33.843462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:33.868588  303437 cri.go:89] found id: ""
	I1210 07:08:33.868612  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.868621  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:33.868627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:33.868691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:33.893467  303437 cri.go:89] found id: ""
	I1210 07:08:33.893492  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.893501  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:33.893507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:33.893568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:33.925853  303437 cri.go:89] found id: ""
	I1210 07:08:33.925883  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.925892  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:33.925899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:33.925961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:33.957483  303437 cri.go:89] found id: ""
	I1210 07:08:33.957507  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.957516  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:33.957523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:33.957582  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:33.990903  303437 cri.go:89] found id: ""
	I1210 07:08:33.990927  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.990937  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:33.990943  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:33.991005  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:34.017222  303437 cri.go:89] found id: ""
	I1210 07:08:34.017249  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.017258  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:34.017264  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:34.017346  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:34.043888  303437 cri.go:89] found id: ""
	I1210 07:08:34.043913  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.043921  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:34.043928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:34.044001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:34.069229  303437 cri.go:89] found id: ""
	I1210 07:08:34.069299  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.069314  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:34.069325  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:34.069337  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:34.127059  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:34.127093  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:34.140507  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:34.140537  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:34.205618  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:34.198206    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.198939    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200406    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200898    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.202347    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:34.198206    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.198939    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200406    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200898    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.202347    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:34.205639  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:34.205651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:34.230228  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:34.230258  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
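
The probe `pgrep -xnf kube-apiserver.*minikube.*` treats its argument as a regex matched against each process's full command line (-f), requires the match to cover the whole line (-x), and reports only the newest match (-n), so a process merely mentioning those words does not satisfy it unless its entire command line matches the pattern. Emulating the -x/-f matching rule (the sample command lines are illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// -x in pgrep anchors the pattern to the whole command line.
    	re := regexp.MustCompile(`^(?:kube-apiserver.*minikube.*)$`)
    	full := "kube-apiserver --advertise-address=192.168.49.2 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt"
    	fmt.Println(re.MatchString(full))             // true: "minikube" appears later in the line
    	fmt.Println(re.MatchString("kube-apiserver")) // false: nothing matches .*minikube.*
    }
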
	I1210 07:08:36.756574  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:36.768692  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:36.768761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:36.791900  303437 cri.go:89] found id: ""
	I1210 07:08:36.791922  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.791930  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:36.791936  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:36.791994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:36.818662  303437 cri.go:89] found id: ""
	I1210 07:08:36.818683  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.818691  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:36.818697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:36.818753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:36.846695  303437 cri.go:89] found id: ""
	I1210 07:08:36.846718  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.846727  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:36.846733  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:36.846794  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:36.870384  303437 cri.go:89] found id: ""
	I1210 07:08:36.870408  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.870417  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:36.870423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:36.870486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:36.895312  303437 cri.go:89] found id: ""
	I1210 07:08:36.895335  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.895343  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:36.895349  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:36.895408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:36.926574  303437 cri.go:89] found id: ""
	I1210 07:08:36.926602  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.926611  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:36.926617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:36.926684  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:36.956760  303437 cri.go:89] found id: ""
	I1210 07:08:36.956786  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.956795  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:36.956801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:36.956864  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:36.983460  303437 cri.go:89] found id: ""
	I1210 07:08:36.983480  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.983488  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:36.983497  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:36.983512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:37.039889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:37.039926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:37.053431  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:37.053508  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:37.117639  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:37.109822    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.110762    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.111850    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.112355    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.113887    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:37.117660  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:37.117673  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:37.148315  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:37.148357  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:39.681355  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:39.695207  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:39.695290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:39.725514  303437 cri.go:89] found id: ""
	I1210 07:08:39.725547  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.725556  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:39.725563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:39.725632  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:39.750801  303437 cri.go:89] found id: ""
	I1210 07:08:39.750834  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.750844  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:39.750850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:39.750920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:39.775756  303437 cri.go:89] found id: ""
	I1210 07:08:39.775779  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.775788  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:39.775794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:39.775853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:39.805059  303437 cri.go:89] found id: ""
	I1210 07:08:39.805085  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.805094  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:39.805100  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:39.805158  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:39.829219  303437 cri.go:89] found id: ""
	I1210 07:08:39.829284  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.829301  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:39.829309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:39.829371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:39.858144  303437 cri.go:89] found id: ""
	I1210 07:08:39.858168  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.858177  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:39.858184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:39.858243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:39.886805  303437 cri.go:89] found id: ""
	I1210 07:08:39.886838  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.886846  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:39.886853  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:39.886919  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:39.918064  303437 cri.go:89] found id: ""
	I1210 07:08:39.918089  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.918099  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:39.918108  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:39.918119  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:39.982343  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:39.982418  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:39.995829  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:39.995854  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:40.078976  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:40.070049    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.070741    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.072500    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.073281    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.074971    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:40.079001  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:40.079033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:40.105734  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:40.105778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:42.635583  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:42.646316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:42.646387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:42.687725  303437 cri.go:89] found id: ""
	I1210 07:08:42.687746  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.687755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:42.687761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:42.687821  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:42.731127  303437 cri.go:89] found id: ""
	I1210 07:08:42.731148  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.731157  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:42.731163  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:42.731224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:42.761187  303437 cri.go:89] found id: ""
	I1210 07:08:42.761218  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.761227  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:42.761232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:42.761293  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:42.789156  303437 cri.go:89] found id: ""
	I1210 07:08:42.789184  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.789193  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:42.789200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:42.789259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:42.813508  303437 cri.go:89] found id: ""
	I1210 07:08:42.813533  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.813542  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:42.813548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:42.813607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:42.838567  303437 cri.go:89] found id: ""
	I1210 07:08:42.838591  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.838601  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:42.838608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:42.838667  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:42.862315  303437 cri.go:89] found id: ""
	I1210 07:08:42.862340  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.862348  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:42.862355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:42.862415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:42.888411  303437 cri.go:89] found id: ""
	I1210 07:08:42.888486  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.888502  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:42.888513  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:42.888526  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:42.950009  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:42.950042  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:42.965591  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:42.965617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:43.040631  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:43.032737    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.033256    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035076    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035768    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.037307    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:43.040653  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:43.040667  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:43.067163  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:43.067197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:45.596845  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:45.607484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:45.607551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:45.631812  303437 cri.go:89] found id: ""
	I1210 07:08:45.631841  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.631851  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:45.631857  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:45.631916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:45.656686  303437 cri.go:89] found id: ""
	I1210 07:08:45.656709  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.656717  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:45.656724  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:45.656782  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:45.705244  303437 cri.go:89] found id: ""
	I1210 07:08:45.705270  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.705279  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:45.705286  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:45.705349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:45.733649  303437 cri.go:89] found id: ""
	I1210 07:08:45.733671  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.733679  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:45.733685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:45.733748  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:45.758319  303437 cri.go:89] found id: ""
	I1210 07:08:45.758340  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.758349  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:45.758355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:45.758416  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:45.782339  303437 cri.go:89] found id: ""
	I1210 07:08:45.782360  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.782369  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:45.782375  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:45.782434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:45.806598  303437 cri.go:89] found id: ""
	I1210 07:08:45.806624  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.806633  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:45.806640  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:45.806700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:45.830909  303437 cri.go:89] found id: ""
	I1210 07:08:45.830933  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.830942  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:45.830951  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:45.830962  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:45.859118  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:45.859148  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:45.920835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:45.920869  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:45.935529  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:45.935555  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:46.015051  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:46.007172    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.007866    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.009596    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.010127    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.011638    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:46.015073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:46.015086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
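	The timestamps show this loop re-running roughly every three seconds. A rough sketch of the equivalent wait loop, assuming a shell inside the node; the component names, pgrep pattern, and crictl flags are taken from the log, while the overall timeout value is an assumption:

	    # Poll until a kube-apiserver process appears, listing each expected
	    # control-plane container in between; empty crictl output means the
	    # container was never created.
	    deadline=$((SECONDS + 300))   # assumed timeout, not from the log
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo 'kube-apiserver never started' >&2
	        exit 1
	      fi
	      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	               kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$c"
	      done
	      sleep 3
	    done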
	I1210 07:08:48.541223  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:48.551805  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:48.551874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:48.576818  303437 cri.go:89] found id: ""
	I1210 07:08:48.576878  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.576891  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:48.576898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:48.576963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:48.601980  303437 cri.go:89] found id: ""
	I1210 07:08:48.602005  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.602014  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:48.602020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:48.602082  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:48.634301  303437 cri.go:89] found id: ""
	I1210 07:08:48.634324  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.634333  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:48.634339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:48.634399  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:48.665296  303437 cri.go:89] found id: ""
	I1210 07:08:48.665321  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.665330  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:48.665336  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:48.665395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:48.696396  303437 cri.go:89] found id: ""
	I1210 07:08:48.696421  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.696430  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:48.696437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:48.696500  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:48.732263  303437 cri.go:89] found id: ""
	I1210 07:08:48.732288  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.732297  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:48.732304  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:48.732365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:48.759127  303437 cri.go:89] found id: ""
	I1210 07:08:48.759152  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.759161  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:48.759170  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:48.759229  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:48.783999  303437 cri.go:89] found id: ""
	I1210 07:08:48.784077  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.784100  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:48.784116  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:48.784141  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:48.797102  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:48.797132  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:48.859523  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:48.852279    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.852826    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854371    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854816    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.856244    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:48.859546  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:48.859560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:48.884680  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:48.884714  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:48.923070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:48.923098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:51.485606  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:51.496059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:51.496133  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:51.521404  303437 cri.go:89] found id: ""
	I1210 07:08:51.521429  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.521438  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:51.521444  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:51.521504  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:51.546743  303437 cri.go:89] found id: ""
	I1210 07:08:51.546768  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.546777  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:51.546785  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:51.546847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:51.577064  303437 cri.go:89] found id: ""
	I1210 07:08:51.577089  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.577099  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:51.577105  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:51.577171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:51.602384  303437 cri.go:89] found id: ""
	I1210 07:08:51.602410  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.602420  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:51.602426  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:51.602484  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:51.630338  303437 cri.go:89] found id: ""
	I1210 07:08:51.630367  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.630375  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:51.630382  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:51.630440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:51.660663  303437 cri.go:89] found id: ""
	I1210 07:08:51.660691  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.660700  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:51.660706  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:51.660765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:51.689142  303437 cri.go:89] found id: ""
	I1210 07:08:51.689170  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.689179  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:51.689186  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:51.689246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:51.723765  303437 cri.go:89] found id: ""
	I1210 07:08:51.723792  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.723800  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:51.723810  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:51.723824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:51.781842  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:51.781873  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:51.795845  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:51.795872  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:51.863519  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:51.855577    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.856333    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858048    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858719    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.860050    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:51.863583  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:51.863611  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:51.888478  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:51.888510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:54.421755  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:54.432308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:54.432377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:54.458171  303437 cri.go:89] found id: ""
	I1210 07:08:54.458194  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.458209  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:54.458216  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:54.458279  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:54.485658  303437 cri.go:89] found id: ""
	I1210 07:08:54.485689  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.485698  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:54.485704  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:54.485763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:54.514257  303437 cri.go:89] found id: ""
	I1210 07:08:54.514279  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.514287  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:54.514294  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:54.514360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:54.538966  303437 cri.go:89] found id: ""
	I1210 07:08:54.539053  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.539078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:54.539096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:54.539182  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:54.563486  303437 cri.go:89] found id: ""
	I1210 07:08:54.563512  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.563521  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:54.563528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:54.563588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:54.588780  303437 cri.go:89] found id: ""
	I1210 07:08:54.588805  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.588814  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:54.588827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:54.588886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:54.618322  303437 cri.go:89] found id: ""
	I1210 07:08:54.618346  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.618356  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:54.618362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:54.618421  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:54.643564  303437 cri.go:89] found id: ""
	I1210 07:08:54.643592  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.643602  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:54.643612  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:54.643624  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:54.683994  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:54.684069  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:54.743900  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:54.743934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:54.757240  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:54.757266  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:54.820795  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:54.813522    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.813935    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.815612    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.816020    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.817550    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:54.820815  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:54.820830  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.345608  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:57.358499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:57.358625  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:57.384563  303437 cri.go:89] found id: ""
	I1210 07:08:57.384589  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.384598  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:57.384604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:57.384682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:57.408236  303437 cri.go:89] found id: ""
	I1210 07:08:57.408263  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.408272  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:57.408279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:57.408337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:57.432014  303437 cri.go:89] found id: ""
	I1210 07:08:57.432037  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.432045  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:57.432052  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:57.432111  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:57.455970  303437 cri.go:89] found id: ""
	I1210 07:08:57.456046  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.456068  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:57.456088  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:57.456173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:57.480680  303437 cri.go:89] found id: ""
	I1210 07:08:57.480752  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.480767  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:57.480775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:57.480841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:57.505993  303437 cri.go:89] found id: ""
	I1210 07:08:57.506026  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.506037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:57.506043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:57.506153  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:57.530713  303437 cri.go:89] found id: ""
	I1210 07:08:57.530739  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.530748  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:57.530754  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:57.530814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:57.555806  303437 cri.go:89] found id: ""
	I1210 07:08:57.555871  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.555897  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:57.555918  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:57.555943  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:57.611292  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:57.611326  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:57.624707  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:57.624735  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:57.707745  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:57.699963    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.701079    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702632    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702942    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.704373    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:57.707768  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:57.707780  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.734701  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:57.734734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
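	Each polling cycle above runs the same crictl probe once per control-plane component. A minimal hand-run equivalent inside the minikube node (the loop is an illustrative sketch, not minikube's own code; the component names are taken from the log):
	
	    # Probe each expected control-plane container; empty output matches the log's found id: "" lines.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done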
	I1210 07:09:00.266582  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:00.305476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:00.305924  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:00.366724  303437 cri.go:89] found id: ""
	I1210 07:09:00.366806  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.366839  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:00.366879  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:00.366992  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:00.396827  303437 cri.go:89] found id: ""
	I1210 07:09:00.396912  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.396939  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:00.396960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:00.397064  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:00.424504  303437 cri.go:89] found id: ""
	I1210 07:09:00.424531  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.424540  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:00.424547  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:00.424609  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:00.453893  303437 cri.go:89] found id: ""
	I1210 07:09:00.453921  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.453931  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:00.453938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:00.454001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:00.480406  303437 cri.go:89] found id: ""
	I1210 07:09:00.480432  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.480441  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:00.480448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:00.480508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:00.505747  303437 cri.go:89] found id: ""
	I1210 07:09:00.505779  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.505788  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:00.505795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:00.505856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:00.535288  303437 cri.go:89] found id: ""
	I1210 07:09:00.535311  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.535320  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:00.535326  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:00.535387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:00.565945  303437 cri.go:89] found id: ""
	I1210 07:09:00.565972  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.565989  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:00.566015  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:00.566034  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:00.596202  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:00.596228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:00.651714  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:00.651748  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:00.666338  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:00.666375  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:00.745706  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:00.737632    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.738139    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.739647    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.740156    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.741940    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:00.745728  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:00.745742  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.272316  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:03.283628  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:03.283695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:03.309180  303437 cri.go:89] found id: ""
	I1210 07:09:03.309263  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.309285  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:03.309300  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:03.309373  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:03.334971  303437 cri.go:89] found id: ""
	I1210 07:09:03.334994  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.335003  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:03.335035  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:03.335096  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:03.361090  303437 cri.go:89] found id: ""
	I1210 07:09:03.361116  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.361125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:03.361131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:03.361189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:03.385067  303437 cri.go:89] found id: ""
	I1210 07:09:03.385141  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.385161  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:03.385169  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:03.385259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:03.420428  303437 cri.go:89] found id: ""
	I1210 07:09:03.420450  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.420459  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:03.420465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:03.420527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:03.453131  303437 cri.go:89] found id: ""
	I1210 07:09:03.453153  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.453162  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:03.453168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:03.453281  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:03.485206  303437 cri.go:89] found id: ""
	I1210 07:09:03.485236  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.485245  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:03.485251  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:03.485311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:03.517204  303437 cri.go:89] found id: ""
	I1210 07:09:03.517229  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.517238  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:03.517253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:03.517265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:03.530656  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:03.530728  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:03.596244  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:03.588660    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.589167    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.590688    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.591215    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.592799    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:03.596305  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:03.596342  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.621847  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:03.621882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:03.649988  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:03.650024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
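	The kubelet logs gathered in each cycle come from the systemd journal; -u selects the kubelet unit and -n 400 limits output to the last 400 lines. The same view can be taken by hand (an illustrative equivalent, not part of the log):
	
	    # Last 400 kubelet unit lines, without a pager.
	    sudo journalctl -u kubelet -n 400 --no-pager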
	I1210 07:09:06.209516  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:06.219893  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:06.219970  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:06.244763  303437 cri.go:89] found id: ""
	I1210 07:09:06.244786  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.244795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:06.244801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:06.244862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:06.271479  303437 cri.go:89] found id: ""
	I1210 07:09:06.271501  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.271509  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:06.271515  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:06.271572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:06.295607  303437 cri.go:89] found id: ""
	I1210 07:09:06.295635  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.295644  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:06.295651  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:06.295706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:06.320774  303437 cri.go:89] found id: ""
	I1210 07:09:06.320798  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.320806  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:06.320823  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:06.320886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:06.349033  303437 cri.go:89] found id: ""
	I1210 07:09:06.349056  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.349064  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:06.349070  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:06.349127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:06.377330  303437 cri.go:89] found id: ""
	I1210 07:09:06.377352  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.377361  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:06.377367  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:06.377426  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:06.400983  303437 cri.go:89] found id: ""
	I1210 07:09:06.401005  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.401014  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:06.401021  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:06.401080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:06.431299  303437 cri.go:89] found id: ""
	I1210 07:09:06.431327  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.431336  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:06.431345  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:06.431356  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:06.462335  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:06.462369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:06.495348  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:06.495376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:06.551592  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:06.551627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:06.565270  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:06.565305  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:06.629933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:06.621965    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.622716    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.624429    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.625124    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.626708    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
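	The repeated connection-refused errors above all point at nothing listening on localhost:8443 inside the node. Two quick manual checks one might run there (illustrative diagnostics, not commands the test itself executes):
	
	    # Is any process listening on the apiserver port?
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    # Probe the health endpoint directly, skipping TLS verification.
	    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"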
	I1210 07:09:09.131098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:09.141585  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:09.141658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:09.168859  303437 cri.go:89] found id: ""
	I1210 07:09:09.168889  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.168898  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:09.168904  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:09.168966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:09.193427  303437 cri.go:89] found id: ""
	I1210 07:09:09.193448  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.193457  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:09.193463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:09.193520  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:09.217804  303437 cri.go:89] found id: ""
	I1210 07:09:09.217928  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.217954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:09.217975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:09.218083  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:09.242204  303437 cri.go:89] found id: ""
	I1210 07:09:09.242277  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.242303  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:09.242322  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:09.242404  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:09.268889  303437 cri.go:89] found id: ""
	I1210 07:09:09.268912  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.268920  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:09.268926  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:09.268984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:09.293441  303437 cri.go:89] found id: ""
	I1210 07:09:09.293514  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.293545  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:09.293563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:09.293671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:09.321925  303437 cri.go:89] found id: ""
	I1210 07:09:09.321946  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.321954  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:09.321960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:09.322026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:09.350603  303437 cri.go:89] found id: ""
	I1210 07:09:09.350623  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.350631  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:09.350641  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:09.350653  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:09.363382  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:09.363409  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:09.429669  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:09.421586    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.422246    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424200    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424743    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.426494    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:09.429690  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:09.429702  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:09.461410  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:09.461444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:09.500508  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:09.500536  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:12.055555  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:12.066220  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:12.066289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:12.093446  303437 cri.go:89] found id: ""
	I1210 07:09:12.093468  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.093477  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:12.093484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:12.093543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:12.119338  303437 cri.go:89] found id: ""
	I1210 07:09:12.119361  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.119370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:12.119376  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:12.119436  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:12.146532  303437 cri.go:89] found id: ""
	I1210 07:09:12.146553  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.146562  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:12.146568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:12.146623  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:12.175977  303437 cri.go:89] found id: ""
	I1210 07:09:12.175999  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.176007  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:12.176013  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:12.176072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:12.200557  303437 cri.go:89] found id: ""
	I1210 07:09:12.200579  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.200588  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:12.200595  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:12.200651  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:12.224652  303437 cri.go:89] found id: ""
	I1210 07:09:12.224674  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.224684  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:12.224690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:12.224750  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:12.249147  303437 cri.go:89] found id: ""
	I1210 07:09:12.249171  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.249180  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:12.249187  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:12.249253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:12.272500  303437 cri.go:89] found id: ""
	I1210 07:09:12.272535  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.272543  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:12.272553  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:12.272580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:12.328368  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:12.328399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:12.341669  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:12.341699  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:12.401653  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:12.394790    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.395266    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396400    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396898    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.398538    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:12.401708  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:12.401734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:12.431751  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:12.431791  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:14.963924  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:14.974138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:14.974206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:15.001054  303437 cri.go:89] found id: ""
	I1210 07:09:15.001080  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.001089  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:15.001097  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:15.001170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:15.040020  303437 cri.go:89] found id: ""
	I1210 07:09:15.040044  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.040053  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:15.040059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:15.040121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:15.065063  303437 cri.go:89] found id: ""
	I1210 07:09:15.065086  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.065095  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:15.065101  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:15.065161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:15.089689  303437 cri.go:89] found id: ""
	I1210 07:09:15.089714  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.089723  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:15.089729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:15.089797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:15.117422  303437 cri.go:89] found id: ""
	I1210 07:09:15.117446  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.117455  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:15.117462  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:15.117521  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:15.143475  303437 cri.go:89] found id: ""
	I1210 07:09:15.143498  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.143507  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:15.143514  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:15.143580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:15.168329  303437 cri.go:89] found id: ""
	I1210 07:09:15.168353  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.168363  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:15.168370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:15.168439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:15.196848  303437 cri.go:89] found id: ""
	I1210 07:09:15.196870  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.196879  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:15.196889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:15.196901  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:15.210071  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:15.210098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:15.270835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:15.262938    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.263645    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265180    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265486    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.267063    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:15.270858  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:15.270870  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:15.296738  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:15.296774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:15.322760  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:15.322786  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
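	Each cycle opens with pgrep -xnf kube-apiserver.*minikube.*: -f matches the pattern against the full command line, -x requires the pattern to match that command line exactly, and -n keeps only the newest match. An empty result (exit status 1) therefore means no kube-apiserver process exists at all. A hand-run equivalent of that opening probe:
	
	    # Exit status 1 with no output means no apiserver process is running.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process running"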
	I1210 07:09:17.877564  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:17.887770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:17.887840  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:17.923653  303437 cri.go:89] found id: ""
	I1210 07:09:17.923691  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.923701  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:17.923708  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:17.923789  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:17.953013  303437 cri.go:89] found id: ""
	I1210 07:09:17.953058  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.953067  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:17.953073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:17.953155  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:17.987520  303437 cri.go:89] found id: ""
	I1210 07:09:17.987565  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.987574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:17.987587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:17.987655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:18.017344  303437 cri.go:89] found id: ""
	I1210 07:09:18.017367  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.017378  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:18.017385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:18.017448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:18.043560  303437 cri.go:89] found id: ""
	I1210 07:09:18.043592  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.043602  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:18.043609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:18.043670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:18.071253  303437 cri.go:89] found id: ""
	I1210 07:09:18.071299  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.071308  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:18.071317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:18.071395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:18.100328  303437 cri.go:89] found id: ""
	I1210 07:09:18.100350  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.100359  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:18.100364  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:18.100422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:18.124828  303437 cri.go:89] found id: ""
	I1210 07:09:18.124855  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.124864  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:18.124873  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:18.124906  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:18.180441  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:18.180473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:18.193811  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:18.193838  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:18.254675  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:18.247379    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.248083    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.249676    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.250042    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.251523    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:18.254700  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:18.254720  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:18.280133  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:18.280167  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:20.813863  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:20.824103  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:20.824175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:20.847793  303437 cri.go:89] found id: ""
	I1210 07:09:20.847818  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.847827  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:20.847833  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:20.847896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:20.873295  303437 cri.go:89] found id: ""
	I1210 07:09:20.873319  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.873328  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:20.873334  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:20.873394  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:20.897570  303437 cri.go:89] found id: ""
	I1210 07:09:20.897594  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.897603  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:20.897609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:20.897665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:20.932999  303437 cri.go:89] found id: ""
	I1210 07:09:20.933025  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.933034  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:20.933041  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:20.933099  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:20.967096  303437 cri.go:89] found id: ""
	I1210 07:09:20.967123  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.967137  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:20.967143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:20.967203  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:20.994239  303437 cri.go:89] found id: ""
	I1210 07:09:20.994265  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.994274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:20.994281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:20.994337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:21.020205  303437 cri.go:89] found id: ""
	I1210 07:09:21.020230  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.020238  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:21.020245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:21.020305  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:21.049401  303437 cri.go:89] found id: ""
	I1210 07:09:21.049427  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.049436  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
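The loop above checks each expected control-plane workload in turn (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) against the k8s.io containerd namespace rooted at /run/containerd/runc/k8s.io. `--quiet` restricts the output to container IDs, so an empty result (`found id: ""`) means no container of that name exists in any state. One iteration of the loop, runnable on the node:

    # -a includes exited containers; --quiet prints IDs only,
    # so empty output means the component never started at all.
    sudo crictl ps -a --quiet --name=kube-apiserver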
	I1210 07:09:21.049445  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:21.049457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:21.062901  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:21.062926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:21.122517  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:21.115640    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.116120    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117591    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117985    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.119485    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:21.115640    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.116120    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117591    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117985    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.119485    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
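Every kubectl attempt fails the same way, with `dial tcp [::1]:8443: connect: connection refused`: nothing is listening on the apiserver port, which is consistent with crictl finding no kube-apiserver container. Two quick manual checks on the node (a sketch, assuming the usual iproute2 and procps tools are present in the node image):

    # Is anything bound to the apiserver port?
    sudo ss -ltn 'sport = :8443'
    # Is a kube-apiserver process running at all?
    sudo pgrep -af kube-apiserver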
	I1210 07:09:21.122537  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:21.122550  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:21.147196  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:21.147230  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:21.177192  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:21.177221  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
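Both the kubelet and containerd gathers cap their output at the newest 400 journal lines (`journalctl -u UNIT -n 400`). When reproducing this interactively it is often more useful to stream the unit instead of taking a fixed tail:

    # Follow kubelet logs live while the start-up retries run.
    sudo journalctl -u kubelet -f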
	I1210 07:09:23.732133  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:23.742890  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:23.742961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:23.774220  303437 cri.go:89] found id: ""
	I1210 07:09:23.774243  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.774251  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:23.774257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:23.774317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:23.798816  303437 cri.go:89] found id: ""
	I1210 07:09:23.798837  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.798846  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:23.798852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:23.798910  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:23.823244  303437 cri.go:89] found id: ""
	I1210 07:09:23.823318  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.823341  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:23.823362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:23.823453  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:23.851474  303437 cri.go:89] found id: ""
	I1210 07:09:23.851500  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.851510  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:23.851516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:23.851598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:23.876565  303437 cri.go:89] found id: ""
	I1210 07:09:23.876641  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.876665  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:23.876679  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:23.876753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:23.901598  303437 cri.go:89] found id: ""
	I1210 07:09:23.901624  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.901632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:23.901641  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:23.901698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:23.939880  303437 cri.go:89] found id: ""
	I1210 07:09:23.945774  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.945837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:23.945917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:23.946105  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:23.983936  303437 cri.go:89] found id: ""
	I1210 07:09:23.984019  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.984045  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:23.984096  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:23.984128  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:24.047417  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:24.047454  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
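The dmesg gather filters the kernel ring buffer down to warnings and worse: `-P` disables the pager, `-H` enables human-readable timestamps, `-L=never` turns color off, and `--level warn,err,crit,alert,emerg` drops everything below warning severity before the 400-line tail. The same command with long options:

    sudo dmesg --nopager --human --color=never \
        --level warn,err,crit,alert,emerg | tail -n 400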
	I1210 07:09:24.060782  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:24.060808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:24.123547  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:24.115234    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.115940    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.117664    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.118067    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.119622    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:24.115234    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.115940    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.117664    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.118067    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.119622    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:24.123570  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:24.123583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:24.148767  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:24.148802  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:26.679138  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:26.691239  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:26.691311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:26.720725  303437 cri.go:89] found id: ""
	I1210 07:09:26.720748  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.720756  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:26.720763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:26.720824  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:26.745903  303437 cri.go:89] found id: ""
	I1210 07:09:26.745926  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.745935  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:26.745941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:26.745999  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:26.771250  303437 cri.go:89] found id: ""
	I1210 07:09:26.771279  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.771289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:26.771295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:26.771354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:26.795771  303437 cri.go:89] found id: ""
	I1210 07:09:26.795795  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.795804  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:26.795810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:26.795912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:26.820992  303437 cri.go:89] found id: ""
	I1210 07:09:26.821013  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.821023  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:26.821029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:26.821091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:26.849537  303437 cri.go:89] found id: ""
	I1210 07:09:26.849559  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.849568  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:26.849575  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:26.849631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:26.882245  303437 cri.go:89] found id: ""
	I1210 07:09:26.882274  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.882284  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:26.882290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:26.882354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:26.907397  303437 cri.go:89] found id: ""
	I1210 07:09:26.907421  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.907437  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:26.907446  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:26.907457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:26.945593  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:26.945619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:27.009478  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:27.009515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:27.023242  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:27.023268  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:27.088362  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:27.080378    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.081206    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.082792    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.083225    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.084935    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:27.080378    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.081206    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.082792    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.083225    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.084935    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:27.088384  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:27.088396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:29.614457  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:29.624717  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:29.624839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:29.648905  303437 cri.go:89] found id: ""
	I1210 07:09:29.648929  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.648938  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:29.648944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:29.649031  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:29.693513  303437 cri.go:89] found id: ""
	I1210 07:09:29.693576  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.693597  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:29.693615  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:29.693703  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:29.718997  303437 cri.go:89] found id: ""
	I1210 07:09:29.719090  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.719114  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:29.719132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:29.719215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:29.749199  303437 cri.go:89] found id: ""
	I1210 07:09:29.749266  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.749289  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:29.749307  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:29.749402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:29.774719  303437 cri.go:89] found id: ""
	I1210 07:09:29.774795  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.774819  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:29.774841  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:29.774931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:29.799913  303437 cri.go:89] found id: ""
	I1210 07:09:29.799977  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.799999  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:29.800017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:29.800095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:29.823673  303437 cri.go:89] found id: ""
	I1210 07:09:29.823747  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.823769  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:29.823787  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:29.823859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:29.848157  303437 cri.go:89] found id: ""
	I1210 07:09:29.848188  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.848198  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:29.848208  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:29.848219  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:29.876009  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:29.876037  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:29.932276  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:29.932307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:29.949872  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:29.949898  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:30.045838  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:30.034181    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.034859    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037026    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037737    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.040198    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:30.034181    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.034859    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037026    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037737    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.040198    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:30.045873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:30.045888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.576040  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:32.587217  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:32.587298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:32.613690  303437 cri.go:89] found id: ""
	I1210 07:09:32.613713  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.613722  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:32.613729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:32.613797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:32.639153  303437 cri.go:89] found id: ""
	I1210 07:09:32.639178  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.639187  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:32.639193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:32.639256  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:32.673727  303437 cri.go:89] found id: ""
	I1210 07:09:32.673799  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.673808  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:32.673815  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:32.673882  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:32.709195  303437 cri.go:89] found id: ""
	I1210 07:09:32.709222  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.709231  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:32.709238  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:32.709298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:32.737425  303437 cri.go:89] found id: ""
	I1210 07:09:32.737458  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.737467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:32.737474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:32.737532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:32.766042  303437 cri.go:89] found id: ""
	I1210 07:09:32.766069  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.766078  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:32.766086  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:32.766145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:32.791060  303437 cri.go:89] found id: ""
	I1210 07:09:32.791089  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.791098  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:32.791104  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:32.791164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:32.815424  303437 cri.go:89] found id: ""
	I1210 07:09:32.815445  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.815453  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:32.815462  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:32.815473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.845676  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:32.845718  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:32.877898  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:32.877927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:32.934870  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:32.934903  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:32.950436  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:32.950516  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:33.023900  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:33.014878    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.015644    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.017336    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.018088    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.019810    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:33.014878    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.015644    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.017336    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.018088    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.019810    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:35.524178  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:35.535098  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:35.535173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:35.563582  303437 cri.go:89] found id: ""
	I1210 07:09:35.563606  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.563614  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:35.563621  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:35.563682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:35.589346  303437 cri.go:89] found id: ""
	I1210 07:09:35.589368  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.589377  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:35.589384  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:35.589442  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:35.613807  303437 cri.go:89] found id: ""
	I1210 07:09:35.613833  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.613841  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:35.613848  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:35.613907  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:35.643139  303437 cri.go:89] found id: ""
	I1210 07:09:35.643162  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.643172  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:35.643178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:35.643240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:35.682597  303437 cri.go:89] found id: ""
	I1210 07:09:35.682629  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.682638  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:35.682645  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:35.682711  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:35.716718  303437 cri.go:89] found id: ""
	I1210 07:09:35.716739  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.716747  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:35.716753  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:35.716811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:35.746357  303437 cri.go:89] found id: ""
	I1210 07:09:35.746378  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.746387  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:35.746393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:35.746455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:35.773219  303437 cri.go:89] found id: ""
	I1210 07:09:35.773240  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.773251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:35.773260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:35.773273  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:35.838850  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:35.830993    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.831530    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833193    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833865    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.835553    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:35.830993    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.831530    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833193    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833865    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.835553    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:35.838868  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:35.838882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:35.864265  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:35.864299  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:35.892689  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:35.892716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:35.952281  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:35.952311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.468021  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:38.478500  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:38.478574  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:38.505131  303437 cri.go:89] found id: ""
	I1210 07:09:38.505156  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.505174  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:38.505197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:38.505267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:38.529142  303437 cri.go:89] found id: ""
	I1210 07:09:38.529166  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.529175  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:38.529181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:38.529239  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:38.554410  303437 cri.go:89] found id: ""
	I1210 07:09:38.554434  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.554442  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:38.554449  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:38.554506  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:38.581372  303437 cri.go:89] found id: ""
	I1210 07:09:38.581395  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.581403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:38.581409  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:38.581472  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:38.606157  303437 cri.go:89] found id: ""
	I1210 07:09:38.606182  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.606191  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:38.606198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:38.606261  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:38.630691  303437 cri.go:89] found id: ""
	I1210 07:09:38.630717  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.630725  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:38.630731  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:38.630788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:38.655423  303437 cri.go:89] found id: ""
	I1210 07:09:38.655447  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.655456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:38.655463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:38.655524  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:38.685788  303437 cri.go:89] found id: ""
	I1210 07:09:38.685814  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.685822  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:38.685832  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:38.685844  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:38.750704  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:38.750740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.764389  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:38.764417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:38.825803  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:38.818667    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.819289    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.820727    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.821107    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.822534    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:38.818667    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.819289    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.820727    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.821107    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.822534    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:38.825824  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:38.825836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:38.850907  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:38.850941  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:41.382590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:41.392996  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:41.393069  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:41.417044  303437 cri.go:89] found id: ""
	I1210 07:09:41.417069  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.417077  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:41.417083  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:41.417146  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:41.442003  303437 cri.go:89] found id: ""
	I1210 07:09:41.442077  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.442107  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:41.442127  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:41.442200  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:41.466958  303437 cri.go:89] found id: ""
	I1210 07:09:41.466985  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.466994  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:41.467000  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:41.467081  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:41.491996  303437 cri.go:89] found id: ""
	I1210 07:09:41.492018  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.492027  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:41.492033  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:41.492093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:41.517865  303437 cri.go:89] found id: ""
	I1210 07:09:41.517890  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.517908  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:41.517929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:41.518012  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:41.544162  303437 cri.go:89] found id: ""
	I1210 07:09:41.544184  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.544193  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:41.544199  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:41.544259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:41.573308  303437 cri.go:89] found id: ""
	I1210 07:09:41.573381  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.573404  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:41.573422  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:41.573502  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:41.602427  303437 cri.go:89] found id: ""
	I1210 07:09:41.602457  303437 logs.go:282] 0 containers: []
	W1210 07:09:41.602467  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:41.602492  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:41.602511  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:41.658769  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:41.658803  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:41.681233  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:41.681259  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:41.747373  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:41.738699    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.739334    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.741375    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.742059    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:41.744132    7687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:41.747398  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:41.747411  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:41.772193  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:41.772224  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
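	The "Gathering logs for ..." steps are plain shell pipelines run inside the node over SSH: journalctl for kubelet and containerd, a filtered dmesg, and a crictl-with-docker-fallback for container status. A local sketch of the same gathering — the command strings are taken verbatim from the log lines above, while the surrounding loop is illustrative, not minikube's logs.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"containerd":       "sudo journalctl -u containerd -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("=== %s: %d bytes ===\n", name, len(out))
		}
	}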
	I1210 07:09:44.302640  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:44.313058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:44.313127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:44.341886  303437 cri.go:89] found id: ""
	I1210 07:09:44.341914  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.341929  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:44.341935  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:44.341995  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:44.367439  303437 cri.go:89] found id: ""
	I1210 07:09:44.367460  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.367469  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:44.367475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:44.367532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:44.391640  303437 cri.go:89] found id: ""
	I1210 07:09:44.391668  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.391678  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:44.391685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:44.391780  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:44.421140  303437 cri.go:89] found id: ""
	I1210 07:09:44.421169  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.421178  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:44.421185  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:44.421263  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:44.444759  303437 cri.go:89] found id: ""
	I1210 07:09:44.444783  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.444792  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:44.444798  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:44.444858  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:44.468926  303437 cri.go:89] found id: ""
	I1210 07:09:44.468959  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.468968  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:44.468978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:44.469045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:44.495556  303437 cri.go:89] found id: ""
	I1210 07:09:44.495581  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.495590  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:44.495597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:44.495676  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:44.519631  303437 cri.go:89] found id: ""
	I1210 07:09:44.519654  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.519663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:44.519672  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:44.519684  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:44.532940  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:44.532964  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:44.598861  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:44.590948    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.591655    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593344    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593846    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.595521    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:44.598921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:44.598950  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:44.624141  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:44.624181  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:44.651186  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:44.651214  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
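	From here the same probe-and-gather cycle repeats roughly every three seconds (07:09:47, :50, :53, :56, :59, 07:10:02, 07:10:05), each time finding no control-plane containers. A sketch of that retry cadence, built on the same `pgrep -xnf kube-apiserver.*minikube.*` probe seen in the log; the helper and the 4-minute deadline are assumptions, not minikube's actual wait logic:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep probe each cycle starts with:
	// exit status 0 means a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(3 * time.Second) // matches the ~3s gap between log cycles
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}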
	I1210 07:09:47.208206  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:47.218613  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:47.218695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:47.244616  303437 cri.go:89] found id: ""
	I1210 07:09:47.244643  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.244652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:47.244659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:47.244717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:47.270353  303437 cri.go:89] found id: ""
	I1210 07:09:47.270378  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.270387  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:47.270393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:47.270469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:47.296082  303437 cri.go:89] found id: ""
	I1210 07:09:47.296108  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.296117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:47.296123  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:47.296181  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:47.320296  303437 cri.go:89] found id: ""
	I1210 07:09:47.320362  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.320380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:47.320388  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:47.320459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:47.345546  303437 cri.go:89] found id: ""
	I1210 07:09:47.345571  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.345580  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:47.345587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:47.345647  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:47.375423  303437 cri.go:89] found id: ""
	I1210 07:09:47.375458  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.375467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:47.375475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:47.375536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:47.399857  303437 cri.go:89] found id: ""
	I1210 07:09:47.399880  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.399894  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:47.399901  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:47.399963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:47.431984  303437 cri.go:89] found id: ""
	I1210 07:09:47.432011  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.432019  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:47.432029  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:47.432060  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:47.458214  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:47.458248  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:47.490816  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:47.490843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:47.549328  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:47.549361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:47.562826  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:47.562855  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:47.624764  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:47.617028    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.617678    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619303    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619812    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.621440    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:50.125980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:50.136223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:50.136289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:50.169825  303437 cri.go:89] found id: ""
	I1210 07:09:50.169858  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.169867  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:50.169874  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:50.169966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:50.198977  303437 cri.go:89] found id: ""
	I1210 07:09:50.199000  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.199031  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:50.199039  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:50.199095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:50.235780  303437 cri.go:89] found id: ""
	I1210 07:09:50.235803  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.235811  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:50.235817  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:50.235875  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:50.259548  303437 cri.go:89] found id: ""
	I1210 07:09:50.259570  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.259578  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:50.259585  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:50.259641  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:50.285338  303437 cri.go:89] found id: ""
	I1210 07:09:50.285361  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.285369  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:50.285375  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:50.285432  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:50.310647  303437 cri.go:89] found id: ""
	I1210 07:09:50.310669  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.310678  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:50.310685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:50.310741  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:50.334419  303437 cri.go:89] found id: ""
	I1210 07:09:50.334448  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.334458  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:50.334464  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:50.334521  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:50.359803  303437 cri.go:89] found id: ""
	I1210 07:09:50.359827  303437 logs.go:282] 0 containers: []
	W1210 07:09:50.359837  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:50.359847  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:50.359858  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:50.384958  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:50.384994  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:50.421068  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:50.421093  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:50.477375  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:50.477409  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:50.490923  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:50.490954  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:50.556587  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:50.548374    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.549044    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.550820    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.551415    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:50.553008    8029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:53.056876  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:53.067392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:53.067464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:53.092029  303437 cri.go:89] found id: ""
	I1210 07:09:53.092052  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.092062  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:53.092068  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:53.092125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:53.118131  303437 cri.go:89] found id: ""
	I1210 07:09:53.118156  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.118165  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:53.118172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:53.118232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:53.147375  303437 cri.go:89] found id: ""
	I1210 07:09:53.147398  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.147407  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:53.147413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:53.147471  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:53.184782  303437 cri.go:89] found id: ""
	I1210 07:09:53.184801  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.184810  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:53.184816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:53.184875  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:53.211867  303437 cri.go:89] found id: ""
	I1210 07:09:53.211892  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.211901  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:53.211908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:53.211965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:53.237656  303437 cri.go:89] found id: ""
	I1210 07:09:53.237678  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.237686  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:53.237693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:53.237761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:53.262840  303437 cri.go:89] found id: ""
	I1210 07:09:53.262861  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.262870  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:53.262876  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:53.262934  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:53.287214  303437 cri.go:89] found id: ""
	I1210 07:09:53.287235  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.287243  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:53.287252  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:53.287265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:53.316241  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:53.316267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:53.371646  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:53.371682  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:53.384755  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:53.384788  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:53.447921  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:53.440066    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.440752    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442394    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442882    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.444521    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:53.447948  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:53.447961  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:55.973173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:55.983576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:55.983656  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:56.011801  303437 cri.go:89] found id: ""
	I1210 07:09:56.011830  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.011840  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:56.011851  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:56.011968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:56.038072  303437 cri.go:89] found id: ""
	I1210 07:09:56.038104  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.038114  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:56.038120  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:56.038198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:56.068512  303437 cri.go:89] found id: ""
	I1210 07:09:56.068586  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.068610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:56.068629  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:56.068716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:56.094431  303437 cri.go:89] found id: ""
	I1210 07:09:56.094462  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.094471  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:56.094478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:56.094550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:56.120840  303437 cri.go:89] found id: ""
	I1210 07:09:56.120865  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.120875  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:56.120881  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:56.120957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:56.145302  303437 cri.go:89] found id: ""
	I1210 07:09:56.145335  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.145344  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:56.145350  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:56.145415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:56.177802  303437 cri.go:89] found id: ""
	I1210 07:09:56.177828  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.177837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:56.177843  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:56.177903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:56.217508  303437 cri.go:89] found id: ""
	I1210 07:09:56.217535  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.217544  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:56.217553  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:56.217565  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:56.236388  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:56.236414  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:56.299818  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:56.290345    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.291927    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.293053    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.294824    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.295281    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:56.299836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:56.299849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:56.324241  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:56.324274  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:56.351770  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:56.351798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:58.907151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:58.920281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:58.920355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:58.951789  303437 cri.go:89] found id: ""
	I1210 07:09:58.951887  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.951924  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:58.951955  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:58.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:58.988101  303437 cri.go:89] found id: ""
	I1210 07:09:58.988174  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.988200  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:58.988214  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:58.988289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:59.015007  303437 cri.go:89] found id: ""
	I1210 07:09:59.015061  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.015070  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:59.015076  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:59.015145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:59.041267  303437 cri.go:89] found id: ""
	I1210 07:09:59.041290  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.041299  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:59.041305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:59.041364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:59.065295  303437 cri.go:89] found id: ""
	I1210 07:09:59.065317  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.065325  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:59.065332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:59.065389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:59.090688  303437 cri.go:89] found id: ""
	I1210 07:09:59.090710  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.090719  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:59.090735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:59.090796  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:59.123411  303437 cri.go:89] found id: ""
	I1210 07:09:59.123433  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.123442  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:59.123448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:59.123507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:59.148970  303437 cri.go:89] found id: ""
	I1210 07:09:59.148995  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.149003  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:59.149013  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:59.149024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:59.213078  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:59.213112  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:59.229582  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:59.229610  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:59.291341  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:59.283620    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.284364    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.285965    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.286418    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.288009    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:59.291371  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:59.291383  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:59.316302  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:59.316335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:01.843334  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:01.854638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:01.854715  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:01.880761  303437 cri.go:89] found id: ""
	I1210 07:10:01.880783  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.880792  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:01.880802  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:01.880863  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:01.910547  303437 cri.go:89] found id: ""
	I1210 07:10:01.910582  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.910591  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:01.910597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:01.910659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:01.946840  303437 cri.go:89] found id: ""
	I1210 07:10:01.946868  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.946878  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:01.946885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:01.946947  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:01.978924  303437 cri.go:89] found id: ""
	I1210 07:10:01.978961  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.978970  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:01.978976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:01.979080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:02.019488  303437 cri.go:89] found id: ""
	I1210 07:10:02.019517  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.019536  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:02.019543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:02.019630  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:02.046286  303437 cri.go:89] found id: ""
	I1210 07:10:02.046307  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.046319  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:02.046325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:02.046390  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:02.072527  303437 cri.go:89] found id: ""
	I1210 07:10:02.072552  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.072562  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:02.072568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:02.072631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:02.097399  303437 cri.go:89] found id: ""
	I1210 07:10:02.097421  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.097430  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:02.097440  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:02.097451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:02.158615  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:02.158651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:02.174600  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:02.174685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:02.250555  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:02.241608    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.242681    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.244544    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.245035    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.246871    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:02.250577  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:02.250590  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:02.276945  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:02.276982  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:04.815961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:04.826415  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:04.826482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:04.851192  303437 cri.go:89] found id: ""
	I1210 07:10:04.851217  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.851226  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:04.851233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:04.851295  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:04.880601  303437 cri.go:89] found id: ""
	I1210 07:10:04.880623  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.880632  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:04.880639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:04.880700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:04.910922  303437 cri.go:89] found id: ""
	I1210 07:10:04.910944  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.910954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:04.910960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:04.911053  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:04.945097  303437 cri.go:89] found id: ""
	I1210 07:10:04.945122  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.945131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:04.945137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:04.945198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:04.976739  303437 cri.go:89] found id: ""
	I1210 07:10:04.976759  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.976768  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:04.976774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:04.976828  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:05.004094  303437 cri.go:89] found id: ""
	I1210 07:10:05.004126  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.004136  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:05.004143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:05.004221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:05.031557  303437 cri.go:89] found id: ""
	I1210 07:10:05.031582  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.031591  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:05.031598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:05.031660  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:05.057223  303437 cri.go:89] found id: ""
	I1210 07:10:05.057245  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.057254  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:05.057264  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:05.057277  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:05.070835  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:05.070868  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:05.134682  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:05.126987    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.127646    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129140    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129598    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.131280    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[identical to the five "connection refused" lines printed under stderr: above]
	** /stderr **
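Every describe-nodes failure in this run reduces to the apiserver endpoint refusing connections. A quick hedged check of that endpoint, independent of kubectl, looks like the sketch below; the address comes straight from the log, and whether /healthz answers anonymously depends on cluster RBAC. The point is only to distinguish "connection refused" (no apiserver listening) from an HTTP response (apiserver up but unhealthy or unauthorized).

    # Address taken from the log above; -k skips TLS verification for a bare probe.
    curl -sk --max-time 5 https://localhost:8443/healthz \
      && echo || echo "apiserver unreachable on localhost:8443"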
	I1210 07:10:05.134701  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:05.134713  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:05.161896  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:05.161984  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:05.199637  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:05.199661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
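The log-gathering steps are plain journalctl/dmesg/crictl invocations, copied here into a one-shot bundle for convenience; the output file names are illustrative, everything else is verbatim from the Run lines above.

    # Collect the same evidence minikube gathers, into local files (names illustrative).
    sudo journalctl -u kubelet -n 400    > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a                    > containers.txt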
	I1210 07:10:07.763534  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:07.773915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:07.773983  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:07.800754  303437 cri.go:89] found id: ""
	I1210 07:10:07.800778  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.800788  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:07.800794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:07.800856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:07.826430  303437 cri.go:89] found id: ""
	I1210 07:10:07.826453  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.826462  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:07.826468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:07.826527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:07.850496  303437 cri.go:89] found id: ""
	I1210 07:10:07.850517  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.850528  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:07.850534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:07.850592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:07.875524  303437 cri.go:89] found id: ""
	I1210 07:10:07.875546  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.875555  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:07.875561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:07.875622  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:07.905072  303437 cri.go:89] found id: ""
	I1210 07:10:07.905094  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.905103  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:07.905109  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:07.905189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:07.936426  303437 cri.go:89] found id: ""
	I1210 07:10:07.936449  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.936457  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:07.936464  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:07.936527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:07.973539  303437 cri.go:89] found id: ""
	I1210 07:10:07.973618  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.973640  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:07.973659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:07.973772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:07.999823  303437 cri.go:89] found id: ""
	I1210 07:10:07.999914  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.999941  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:07.999964  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:08.000003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:08.068982  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:08.060765    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062197    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062643    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064190    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064500    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:08.060765    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062197    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062643    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064190    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064500    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:08.069056  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:08.069079  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:08.094318  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:08.094351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:08.122292  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:08.122320  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:08.184455  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:08.184505  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:10.701562  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:10.711949  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:10.712015  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:10.737041  303437 cri.go:89] found id: ""
	I1210 07:10:10.737068  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.737078  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:10.737085  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:10.737152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:10.766737  303437 cri.go:89] found id: ""
	I1210 07:10:10.766759  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.766769  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:10.766775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:10.766833  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:10.795664  303437 cri.go:89] found id: ""
	I1210 07:10:10.795689  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.795698  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:10.795705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:10.795763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:10.819880  303437 cri.go:89] found id: ""
	I1210 07:10:10.819908  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.819917  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:10.819924  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:10.819986  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:10.843991  303437 cri.go:89] found id: ""
	I1210 07:10:10.844028  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.844037  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:10.844043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:10.844121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:10.868988  303437 cri.go:89] found id: ""
	I1210 07:10:10.869010  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.869019  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:10.869025  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:10.869088  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:10.893331  303437 cri.go:89] found id: ""
	I1210 07:10:10.893361  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.893371  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:10.893392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:10.893473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:10.925989  303437 cri.go:89] found id: ""
	I1210 07:10:10.926016  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.926025  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:10.926034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:10.926045  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:10.951381  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:10.951417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:10.992523  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:10.992547  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:11.048715  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:11.048751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:11.062864  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:11.062892  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:11.126862  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:11.117747    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.118147    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.119641    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.120260    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.122142    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:11.117747    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.118147    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.119641    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.120260    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.122142    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:13.627173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:13.640121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:13.640189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:13.666074  303437 cri.go:89] found id: ""
	I1210 07:10:13.666097  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.666106  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:13.666112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:13.666172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:13.694979  303437 cri.go:89] found id: ""
	I1210 07:10:13.695001  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.695043  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:13.695051  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:13.695110  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:13.719004  303437 cri.go:89] found id: ""
	I1210 07:10:13.719045  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.719054  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:13.719066  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:13.719128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:13.743528  303437 cri.go:89] found id: ""
	I1210 07:10:13.743592  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.743614  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:13.743627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:13.743700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:13.773695  303437 cri.go:89] found id: ""
	I1210 07:10:13.773720  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.773737  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:13.773743  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:13.773802  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:13.797583  303437 cri.go:89] found id: ""
	I1210 07:10:13.797605  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.797614  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:13.797620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:13.797678  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:13.825318  303437 cri.go:89] found id: ""
	I1210 07:10:13.825348  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.825357  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:13.825363  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:13.825420  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:13.853561  303437 cri.go:89] found id: ""
	I1210 07:10:13.853585  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.853594  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:13.853604  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:13.853622  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:13.935926  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:13.915703    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.920870    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.921425    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923146    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923621    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:13.915703    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.920870    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.921425    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923146    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923621    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:13.935954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:13.935967  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:13.962598  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:13.962630  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:13.990458  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:13.990484  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:14.047843  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:14.047880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.562478  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:16.576152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:16.576222  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:16.604031  303437 cri.go:89] found id: ""
	I1210 07:10:16.604054  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.604063  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:16.604069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:16.604128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:16.628609  303437 cri.go:89] found id: ""
	I1210 07:10:16.628631  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.628640  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:16.628658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:16.628717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:16.653619  303437 cri.go:89] found id: ""
	I1210 07:10:16.653656  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.653665  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:16.653671  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:16.653756  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:16.682568  303437 cri.go:89] found id: ""
	I1210 07:10:16.682604  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.682613  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:16.682620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:16.682693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:16.707801  303437 cri.go:89] found id: ""
	I1210 07:10:16.707835  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.707845  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:16.707852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:16.707935  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:16.732620  303437 cri.go:89] found id: ""
	I1210 07:10:16.732688  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.732711  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:16.732728  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:16.732825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:16.758445  303437 cri.go:89] found id: ""
	I1210 07:10:16.758467  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.758475  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:16.758482  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:16.758539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:16.783975  303437 cri.go:89] found id: ""
	I1210 07:10:16.784001  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.784010  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:16.784019  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:16.784047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:16.814022  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:16.814049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:16.869237  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:16.869269  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.882654  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:16.882731  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:16.969042  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:16.957319    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.958047    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.960851    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.961625    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.964373    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:16.957319    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.958047    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.960851    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.961625    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.964373    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:16.969064  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:16.969086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:19.496234  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:19.506951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:19.507093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:19.530611  303437 cri.go:89] found id: ""
	I1210 07:10:19.530643  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.530652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:19.530658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:19.530727  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:19.557799  303437 cri.go:89] found id: ""
	I1210 07:10:19.557835  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.557845  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:19.557852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:19.557920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:19.582933  303437 cri.go:89] found id: ""
	I1210 07:10:19.582967  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.582976  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:19.582983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:19.583072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:19.607826  303437 cri.go:89] found id: ""
	I1210 07:10:19.607889  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.607909  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:19.607917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:19.607979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:19.632512  303437 cri.go:89] found id: ""
	I1210 07:10:19.632580  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.632597  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:19.632604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:19.632665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:19.657636  303437 cri.go:89] found id: ""
	I1210 07:10:19.657668  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.657677  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:19.657684  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:19.657765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:19.682353  303437 cri.go:89] found id: ""
	I1210 07:10:19.682423  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.682456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:19.682476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:19.682562  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:19.706488  303437 cri.go:89] found id: ""
	I1210 07:10:19.706549  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.706582  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:19.706606  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:19.706644  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:19.719694  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:19.719721  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:19.784893  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:19.777331    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.777928    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.779521    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.780075    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.781604    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:19.777331    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.777928    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.779521    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.780075    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.781604    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:19.784915  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:19.784928  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:19.809606  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:19.809641  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:19.841622  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:19.841657  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:22.397071  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:22.407225  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:22.407298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:22.443280  303437 cri.go:89] found id: ""
	I1210 07:10:22.443304  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.443313  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:22.443320  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:22.443377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:22.476100  303437 cri.go:89] found id: ""
	I1210 07:10:22.476121  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.476130  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:22.476136  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:22.476197  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:22.504294  303437 cri.go:89] found id: ""
	I1210 07:10:22.504317  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.504326  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:22.504332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:22.504388  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:22.527983  303437 cri.go:89] found id: ""
	I1210 07:10:22.528006  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.528015  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:22.528028  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:22.528085  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:22.552219  303437 cri.go:89] found id: ""
	I1210 07:10:22.552243  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.552252  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:22.552257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:22.552314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:22.576437  303437 cri.go:89] found id: ""
	I1210 07:10:22.576459  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.576469  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:22.576475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:22.576530  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:22.601577  303437 cri.go:89] found id: ""
	I1210 07:10:22.601599  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.601608  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:22.601614  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:22.601671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:22.625855  303437 cri.go:89] found id: ""
	I1210 07:10:22.625878  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.625889  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:22.625899  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:22.625910  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:22.681686  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:22.681732  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:22.695126  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:22.695154  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:22.758688  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:22.751337    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.752263    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.753775    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.754238    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.755632    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:22.751337    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.752263    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.753775    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.754238    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.755632    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:22.758709  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:22.758722  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:22.783636  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:22.783671  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:25.311139  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:25.321885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:25.321968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:25.346177  303437 cri.go:89] found id: ""
	I1210 07:10:25.346257  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.346280  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:25.346299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:25.346402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:25.371678  303437 cri.go:89] found id: ""
	I1210 07:10:25.371751  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.371766  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:25.371773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:25.371836  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:25.404393  303437 cri.go:89] found id: ""
	I1210 07:10:25.404419  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.404436  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:25.404450  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:25.404528  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:25.439726  303437 cri.go:89] found id: ""
	I1210 07:10:25.439766  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.439779  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:25.439803  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:25.439965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:25.476965  303437 cri.go:89] found id: ""
	I1210 07:10:25.476998  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.477007  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:25.477018  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:25.477127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:25.502342  303437 cri.go:89] found id: ""
	I1210 07:10:25.502369  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.502378  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:25.502385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:25.502451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:25.528396  303437 cri.go:89] found id: ""
	I1210 07:10:25.528423  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.528432  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:25.528439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:25.528543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:25.555005  303437 cri.go:89] found id: ""
	I1210 07:10:25.555065  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.555074  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:25.555083  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:25.555095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:25.568421  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:25.568450  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:25.629120  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:25.622083    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.622649    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624105    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624522    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.626001    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:25.622083    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.622649    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624105    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624522    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.626001    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:25.629143  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:25.629155  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:25.654736  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:25.654768  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:25.685404  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:25.685473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
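Note (illustrative, not part of the captured output): the block above is one complete diagnostic pass. minikube first looks for a running apiserver process with pgrep, then asks crictl for each expected control-plane container by name, and every query returns empty. A minimal bash sketch of the same per-component check, reusing only commands that appear in this log (the component list and the runc root /run/containerd/runc/k8s.io are copied from the lines above; the loop itself is illustrative, not minikube's code):

    # check each expected control-plane container the way the log does
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      # empty output here corresponds to the log's
      # 'No container was found matching "<name>"' warnings
      [ -n "${ids}" ] && echo "${name}: ${ids}" || echo "no container matching ${name}"
    done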
	I1210 07:10:28.247164  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:28.257638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:28.257709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:28.283706  303437 cri.go:89] found id: ""
	I1210 07:10:28.283729  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.283738  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:28.283744  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:28.283806  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:28.311304  303437 cri.go:89] found id: ""
	I1210 07:10:28.311327  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.311336  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:28.311342  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:28.311407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:28.336026  303437 cri.go:89] found id: ""
	I1210 07:10:28.336048  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.336056  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:28.336062  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:28.336121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:28.361333  303437 cri.go:89] found id: ""
	I1210 07:10:28.361354  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.361362  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:28.361369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:28.361428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:28.389101  303437 cri.go:89] found id: ""
	I1210 07:10:28.389123  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.389132  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:28.389138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:28.389196  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:28.422619  303437 cri.go:89] found id: ""
	I1210 07:10:28.422641  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.422649  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:28.422656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:28.422713  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:28.453144  303437 cri.go:89] found id: ""
	I1210 07:10:28.453217  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.453240  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:28.453260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:28.453347  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:28.483124  303437 cri.go:89] found id: ""
	I1210 07:10:28.483148  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.483158  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:28.483167  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:28.483178  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:28.496766  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:28.496793  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:28.563971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:28.556200    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.557736    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.558089    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559546    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559812    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:28.556200    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.557736    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.558089    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559546    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559812    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:28.564003  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:28.564015  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:28.588981  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:28.589012  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:28.617971  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:28.618000  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.175214  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:31.187495  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:31.187568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:31.221446  303437 cri.go:89] found id: ""
	I1210 07:10:31.221473  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.221482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:31.221488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:31.221548  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:31.246343  303437 cri.go:89] found id: ""
	I1210 07:10:31.246377  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.246386  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:31.246392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:31.246459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:31.270266  303437 cri.go:89] found id: ""
	I1210 07:10:31.270289  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.270303  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:31.270309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:31.270365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:31.295166  303437 cri.go:89] found id: ""
	I1210 07:10:31.295190  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.295199  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:31.295219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:31.295284  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:31.320783  303437 cri.go:89] found id: ""
	I1210 07:10:31.320822  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.320831  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:31.320838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:31.320902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:31.344885  303437 cri.go:89] found id: ""
	I1210 07:10:31.344910  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.344919  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:31.344927  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:31.344984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:31.369604  303437 cri.go:89] found id: ""
	I1210 07:10:31.369627  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.369636  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:31.369642  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:31.369700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:31.396633  303437 cri.go:89] found id: ""
	I1210 07:10:31.396654  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.396663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:31.396672  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:31.396685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.458644  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:31.458678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:31.474603  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:31.474632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:31.540901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:31.533490    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.534340    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.535925    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.536236    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.537709    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:31.533490    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.534340    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.535925    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.536236    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.537709    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:31.540921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:31.540933  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:31.565730  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:31.565763  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:34.098229  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:34.108967  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:34.109037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:34.137131  303437 cri.go:89] found id: ""
	I1210 07:10:34.137153  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.137162  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:34.137168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:34.137224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:34.171468  303437 cri.go:89] found id: ""
	I1210 07:10:34.171489  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.171498  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:34.171504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:34.171565  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:34.199509  303437 cri.go:89] found id: ""
	I1210 07:10:34.199531  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.199539  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:34.199545  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:34.199603  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:34.230270  303437 cri.go:89] found id: ""
	I1210 07:10:34.230292  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.230301  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:34.230308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:34.230368  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:34.257508  303437 cri.go:89] found id: ""
	I1210 07:10:34.257529  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.257538  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:34.257544  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:34.257598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:34.285487  303437 cri.go:89] found id: ""
	I1210 07:10:34.285509  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.285517  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:34.285524  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:34.285584  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:34.312438  303437 cri.go:89] found id: ""
	I1210 07:10:34.312460  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.312469  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:34.312475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:34.312535  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:34.336063  303437 cri.go:89] found id: ""
	I1210 07:10:34.336137  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.336152  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:34.336161  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:34.336172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:34.392136  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:34.392168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:34.405661  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:34.405691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:34.486073  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:34.478058    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.478642    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480446    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480885    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.482506    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:34.478058    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.478642    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480446    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480885    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.482506    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:34.486096  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:34.486110  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:34.512711  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:34.512745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
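Note (illustrative): the "container status" step above is a deliberate fallback chain. The backticked `which crictl || echo crictl` resolves crictl's path when it is on $PATH and otherwise emits the bare name, and if the whole crictl invocation fails, `|| sudo docker ps -a` retries with Docker. A hedged long-form rendering of that one-liner (behaviorally equivalent sketch, not the code minikube ran):

    # prefer crictl (resolved via `which` when possible), fall back to docker
    CRICTL="$(which crictl || echo crictl)"
    sudo "${CRICTL}" ps -a || sudo docker ps -a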
	I1210 07:10:37.043733  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:37.054272  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:37.054343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:37.080616  303437 cri.go:89] found id: ""
	I1210 07:10:37.080640  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.080649  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:37.080656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:37.080716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:37.104975  303437 cri.go:89] found id: ""
	I1210 07:10:37.105002  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.105010  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:37.105017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:37.105077  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:37.128929  303437 cri.go:89] found id: ""
	I1210 07:10:37.128952  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.128960  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:37.128966  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:37.129026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:37.154538  303437 cri.go:89] found id: ""
	I1210 07:10:37.154561  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.154570  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:37.154577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:37.154637  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:37.183900  303437 cri.go:89] found id: ""
	I1210 07:10:37.183920  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.183928  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:37.183934  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:37.183994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:37.218659  303437 cri.go:89] found id: ""
	I1210 07:10:37.218681  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.218689  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:37.218696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:37.218758  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:37.243786  303437 cri.go:89] found id: ""
	I1210 07:10:37.243808  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.243817  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:37.243824  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:37.243889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:37.271822  303437 cri.go:89] found id: ""
	I1210 07:10:37.271847  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.271856  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:37.271865  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:37.271877  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:37.327230  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:37.327261  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:37.340728  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:37.340755  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:37.402472  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:37.395392    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.395764    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397322    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397745    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.399404    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:37.395392    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.395764    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397322    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397745    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.399404    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:37.402534  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:37.402560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:37.428514  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:37.428587  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:39.957676  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:39.968353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:39.968422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:39.996461  303437 cri.go:89] found id: ""
	I1210 07:10:39.996487  303437 logs.go:282] 0 containers: []
	W1210 07:10:39.996497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:39.996504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:39.996572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:40.052529  303437 cri.go:89] found id: ""
	I1210 07:10:40.052553  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.052563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:40.052570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:40.052635  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:40.083247  303437 cri.go:89] found id: ""
	I1210 07:10:40.083272  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.083282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:40.083288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:40.083349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:40.109171  303437 cri.go:89] found id: ""
	I1210 07:10:40.109195  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.109204  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:40.109211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:40.109271  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:40.138871  303437 cri.go:89] found id: ""
	I1210 07:10:40.138950  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.138972  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:40.138992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:40.139100  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:40.176299  303437 cri.go:89] found id: ""
	I1210 07:10:40.176335  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.176345  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:40.176352  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:40.176448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:40.213557  303437 cri.go:89] found id: ""
	I1210 07:10:40.213590  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.213600  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:40.213622  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:40.213706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:40.253605  303437 cri.go:89] found id: ""
	I1210 07:10:40.253639  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.253648  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:40.253658  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:40.253670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:40.289048  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:40.289076  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:40.348311  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:40.348344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:40.364207  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:40.364249  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:40.431287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:40.422606    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.423275    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.424961    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.425595    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.427272    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:40.422606    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.423275    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.424961    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.425595    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.427272    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:40.431309  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:40.431325  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:42.962817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:42.973583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:42.973714  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:43.004181  303437 cri.go:89] found id: ""
	I1210 07:10:43.004211  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.004222  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:43.004235  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:43.004302  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:43.031231  303437 cri.go:89] found id: ""
	I1210 07:10:43.031252  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.031261  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:43.031267  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:43.031324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:43.056959  303437 cri.go:89] found id: ""
	I1210 07:10:43.056991  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.057002  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:43.057009  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:43.057072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:43.086361  303437 cri.go:89] found id: ""
	I1210 07:10:43.086393  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.086403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:43.086413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:43.086481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:43.112977  303437 cri.go:89] found id: ""
	I1210 07:10:43.113003  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.113013  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:43.113020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:43.113079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:43.137716  303437 cri.go:89] found id: ""
	I1210 07:10:43.137740  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.137749  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:43.137755  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:43.137814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:43.173396  303437 cri.go:89] found id: ""
	I1210 07:10:43.173421  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.173431  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:43.173437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:43.173494  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:43.202828  303437 cri.go:89] found id: ""
	I1210 07:10:43.202852  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.202861  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:43.202871  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:43.202885  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:43.265997  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:43.266036  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:43.281547  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:43.281582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:43.359532  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:43.352125   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.352633   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354207   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354531   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.356009   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:43.352125   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.352633   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354207   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354531   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.356009   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:43.359554  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:43.359567  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:43.392377  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:43.392433  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:45.942739  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:45.955296  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:45.955374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:45.984462  303437 cri.go:89] found id: ""
	I1210 07:10:45.984488  303437 logs.go:282] 0 containers: []
	W1210 07:10:45.984497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:45.984507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:45.984566  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:46.014873  303437 cri.go:89] found id: ""
	I1210 07:10:46.014898  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.014920  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:46.014928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:46.015038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:46.044539  303437 cri.go:89] found id: ""
	I1210 07:10:46.044565  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.044574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:46.044581  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:46.044642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:46.070950  303437 cri.go:89] found id: ""
	I1210 07:10:46.070975  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.070985  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:46.070992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:46.071091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:46.101134  303437 cri.go:89] found id: ""
	I1210 07:10:46.101160  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.101170  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:46.101176  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:46.101255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:46.126003  303437 cri.go:89] found id: ""
	I1210 07:10:46.126028  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.126037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:46.126044  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:46.126103  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:46.152209  303437 cri.go:89] found id: ""
	I1210 07:10:46.152231  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.152239  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:46.152245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:46.152303  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:46.183764  303437 cri.go:89] found id: ""
	I1210 07:10:46.183786  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.183794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:46.183803  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:46.183813  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:46.248135  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:46.248173  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:46.262749  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:46.262778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:46.330280  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:46.322629   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.323199   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.324997   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.325371   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.326892   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:46.322629   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.323199   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.324997   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.325371   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.326892   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:46.330302  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:46.330315  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:46.356151  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:46.356184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:48.884130  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:48.894898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:48.894989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:48.919239  303437 cri.go:89] found id: ""
	I1210 07:10:48.919266  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.919275  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:48.919282  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:48.919343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:48.946463  303437 cri.go:89] found id: ""
	I1210 07:10:48.946487  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.946497  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:48.946509  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:48.946569  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:48.971661  303437 cri.go:89] found id: ""
	I1210 07:10:48.971735  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.971757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:48.971772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:48.971857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:48.996435  303437 cri.go:89] found id: ""
	I1210 07:10:48.996457  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.996466  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:48.996472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:48.996539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:49.023269  303437 cri.go:89] found id: ""
	I1210 07:10:49.023296  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.023305  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:49.023311  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:49.023371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:49.052018  303437 cri.go:89] found id: ""
	I1210 07:10:49.052042  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.052051  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:49.052058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:49.052125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:49.076866  303437 cri.go:89] found id: ""
	I1210 07:10:49.076929  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.076943  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:49.076951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:49.077009  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:49.105029  303437 cri.go:89] found id: ""
	I1210 07:10:49.105051  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.105061  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:49.105070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:49.105081  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:49.161025  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:49.161103  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:49.176997  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:49.177065  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:49.246287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:49.238608   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.239236   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.240909   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.241468   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.243158   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:49.238608   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.239236   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.240909   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.241468   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.243158   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:49.246359  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:49.246386  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:49.271827  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:49.271865  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
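
When every component check comes back empty, minikube falls back to gathering a diagnostic bundle: kubelet and containerd unit logs, kernel warnings, a kubectl describe of the nodes, and a raw container listing. The same bundle can be collected by hand from inside the node; these are the exact commands from the log above, only grouped into one script:

    # Diagnostic bundle, as gathered by minikube in the log.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
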
	I1210 07:10:51.801611  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:51.812172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:51.812240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:51.836841  303437 cri.go:89] found id: ""
	I1210 07:10:51.836864  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.836874  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:51.836880  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:51.836942  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:51.860730  303437 cri.go:89] found id: ""
	I1210 07:10:51.860754  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.860764  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:51.860770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:51.860831  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:51.885358  303437 cri.go:89] found id: ""
	I1210 07:10:51.885379  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.885388  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:51.885394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:51.885452  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:51.909974  303437 cri.go:89] found id: ""
	I1210 07:10:51.910038  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.910062  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:51.910080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:51.910152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:51.938488  303437 cri.go:89] found id: ""
	I1210 07:10:51.938553  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.938577  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:51.938596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:51.938669  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:51.964789  303437 cri.go:89] found id: ""
	I1210 07:10:51.964821  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.964831  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:51.964837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:51.964914  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:51.988457  303437 cri.go:89] found id: ""
	I1210 07:10:51.988478  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.988487  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:51.988493  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:51.988553  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:52.032140  303437 cri.go:89] found id: ""
	I1210 07:10:52.032164  303437 logs.go:282] 0 containers: []
	W1210 07:10:52.032177  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:52.032187  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:52.032198  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:52.058273  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:52.058311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:52.089897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:52.089924  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:52.145350  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:52.145387  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:52.162441  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:52.162475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:52.244944  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:52.233560   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.234752   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.239562   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.240120   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.241681   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:52.233560   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.234752   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.239562   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.240120   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.241681   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
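
Every describe-nodes attempt fails the same way: kubectl dials https://localhost:8443 (the apiserver's secure port on the node) and the connection is refused, meaning nothing is listening there at all. That is consistent with the empty kube-apiserver container listing, not with a slow or unhealthy apiserver. A quick sketch for confirming that by hand; pgrep is the same probe minikube runs between cycles, while ss and curl are assumptions about tools available in the node image:

    # Is an apiserver process running, and is anything bound to 8443?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # /livez is the apiserver liveness endpoint; refusal here mirrors the log.
    curl -sk https://localhost:8443/livez || echo "connection refused"
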
	I1210 07:10:54.746617  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:54.757597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:54.757677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:54.785180  303437 cri.go:89] found id: ""
	I1210 07:10:54.785205  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.785215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:54.785222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:54.785283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:54.813159  303437 cri.go:89] found id: ""
	I1210 07:10:54.813184  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.813193  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:54.813200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:54.813258  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:54.840481  303437 cri.go:89] found id: ""
	I1210 07:10:54.840503  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.840512  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:54.840519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:54.840578  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:54.869478  303437 cri.go:89] found id: ""
	I1210 07:10:54.869500  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.869509  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:54.869516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:54.869573  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:54.892998  303437 cri.go:89] found id: ""
	I1210 07:10:54.893020  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.893028  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:54.893034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:54.893093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:54.921729  303437 cri.go:89] found id: ""
	I1210 07:10:54.921755  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.921765  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:54.921772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:54.921838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:54.946951  303437 cri.go:89] found id: ""
	I1210 07:10:54.946976  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.946985  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:54.946992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:54.947069  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:54.972444  303437 cri.go:89] found id: ""
	I1210 07:10:54.972466  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.972475  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:54.972484  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:54.972502  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:54.997696  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:54.997743  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:55.038495  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:55.038532  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:55.099784  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:55.099825  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:55.115531  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:55.115561  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:55.193319  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:55.183569   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.187128   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188202   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188540   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.190127   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:55.183569   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.187128   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188202   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188540   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.190127   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:57.693558  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:57.704587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:57.704698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:57.733113  303437 cri.go:89] found id: ""
	I1210 07:10:57.733137  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.733147  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:57.733154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:57.733217  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:57.759697  303437 cri.go:89] found id: ""
	I1210 07:10:57.759721  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.759730  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:57.759736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:57.759813  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:57.785244  303437 cri.go:89] found id: ""
	I1210 07:10:57.785273  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.785282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:57.785288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:57.785349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:57.819299  303437 cri.go:89] found id: ""
	I1210 07:10:57.819324  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.819333  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:57.819339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:57.819397  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:57.843698  303437 cri.go:89] found id: ""
	I1210 07:10:57.843720  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.843729  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:57.843736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:57.843797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:57.867903  303437 cri.go:89] found id: ""
	I1210 07:10:57.867928  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.867938  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:57.867944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:57.868003  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:57.892038  303437 cri.go:89] found id: ""
	I1210 07:10:57.892065  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.892074  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:57.892080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:57.892144  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:57.917032  303437 cri.go:89] found id: ""
	I1210 07:10:57.917055  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.917064  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:57.917073  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:57.917084  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:57.972772  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:57.972808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:57.986446  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:57.986475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:58.053540  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:58.045445   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.046514   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.047201   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.048794   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.049108   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:58.045445   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.046514   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.047201   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.048794   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.049108   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:58.053559  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:58.053572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:58.078999  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:58.079080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:00.609346  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:00.620922  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:00.620998  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:00.647744  303437 cri.go:89] found id: ""
	I1210 07:11:00.647766  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.647775  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:00.647781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:00.647838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:00.685141  303437 cri.go:89] found id: ""
	I1210 07:11:00.685162  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.685171  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:00.685177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:00.685237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:00.713949  303437 cri.go:89] found id: ""
	I1210 07:11:00.713971  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.713980  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:00.713986  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:00.714045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:00.740428  303437 cri.go:89] found id: ""
	I1210 07:11:00.740453  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.740463  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:00.740471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:00.740531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:00.765430  303437 cri.go:89] found id: ""
	I1210 07:11:00.765455  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.765464  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:00.765471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:00.765529  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:00.790771  303437 cri.go:89] found id: ""
	I1210 07:11:00.790797  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.790806  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:00.790813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:00.790871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:00.817430  303437 cri.go:89] found id: ""
	I1210 07:11:00.817456  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.817465  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:00.817471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:00.817531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:00.841761  303437 cri.go:89] found id: ""
	I1210 07:11:00.841785  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.841794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:00.841803  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:00.841817  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:00.855324  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:00.855351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:00.926358  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:00.918056   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.918855   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.920644   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.921186   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.922891   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:00.918056   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.918855   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.920644   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.921186   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.922891   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:00.926380  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:00.926394  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:00.951644  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:00.951678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:00.979845  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:00.979875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
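
The timestamps show the cadence of the wait loop: one pgrep plus a full CRI sweep roughly every three seconds (07:10:48, :51, :54, :57, 07:11:00, ...), with an identical outcome on every pass. A rough shell equivalent of that loop; the three-second interval is read off the timestamps here and is not claimed to be minikube's configured value:

    # Poll until an apiserver process appears, mirroring the loop in the log.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        echo "$(date +%T) apiserver not running yet"
        sleep 3
    done
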
	I1210 07:11:03.540927  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:03.551392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:03.551462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:03.576792  303437 cri.go:89] found id: ""
	I1210 07:11:03.576821  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.576830  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:03.576837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:03.576896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:03.601193  303437 cri.go:89] found id: ""
	I1210 07:11:03.601216  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.601225  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:03.601233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:03.601290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:03.626528  303437 cri.go:89] found id: ""
	I1210 07:11:03.626550  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.626559  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:03.626565  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:03.626624  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:03.656106  303437 cri.go:89] found id: ""
	I1210 07:11:03.656128  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.656137  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:03.656149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:03.656206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:03.691936  303437 cri.go:89] found id: ""
	I1210 07:11:03.691960  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.691970  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:03.691976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:03.692037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:03.721295  303437 cri.go:89] found id: ""
	I1210 07:11:03.721321  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.721331  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:03.721338  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:03.721409  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:03.750080  303437 cri.go:89] found id: ""
	I1210 07:11:03.750105  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.750114  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:03.750121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:03.750205  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:03.777748  303437 cri.go:89] found id: ""
	I1210 07:11:03.777771  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.777780  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:03.777815  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:03.777836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:03.792128  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:03.792159  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:03.859337  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:03.851981   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.852547   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854609   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.856112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:03.851981   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.852547   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854609   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.856112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:03.859358  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:03.859371  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:03.885445  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:03.885482  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:03.915897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:03.915925  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:06.473632  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:06.484351  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:06.484431  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:06.509957  303437 cri.go:89] found id: ""
	I1210 07:11:06.509982  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.509991  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:06.509997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:06.510061  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:06.537150  303437 cri.go:89] found id: ""
	I1210 07:11:06.537175  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.537185  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:06.537195  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:06.537255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:06.571765  303437 cri.go:89] found id: ""
	I1210 07:11:06.571789  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.571798  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:06.571804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:06.571872  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:06.600905  303437 cri.go:89] found id: ""
	I1210 07:11:06.600928  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.600938  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:06.600944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:06.601007  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:06.625296  303437 cri.go:89] found id: ""
	I1210 07:11:06.625320  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.625329  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:06.625335  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:06.625396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:06.653467  303437 cri.go:89] found id: ""
	I1210 07:11:06.653490  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.653499  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:06.653505  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:06.653563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:06.693284  303437 cri.go:89] found id: ""
	I1210 07:11:06.693309  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.693319  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:06.693325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:06.693385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:06.731038  303437 cri.go:89] found id: ""
	I1210 07:11:06.731061  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.731069  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:06.731079  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:06.731091  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:06.744632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:06.744661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:06.805649  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:06.797284   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.798135   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.799790   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.800106   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.801600   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:06.797284   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.798135   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.799790   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.800106   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.801600   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:06.805675  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:06.805697  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:06.830881  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:06.830917  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:06.859403  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:06.859429  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:09.415956  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:09.428117  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:09.428237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:09.457364  303437 cri.go:89] found id: ""
	I1210 07:11:09.457426  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.457457  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:09.457478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:09.457570  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:09.487281  303437 cri.go:89] found id: ""
	I1210 07:11:09.487343  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.487375  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:09.487395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:09.487481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:09.512841  303437 cri.go:89] found id: ""
	I1210 07:11:09.512912  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.512945  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:09.512964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:09.513056  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:09.538740  303437 cri.go:89] found id: ""
	I1210 07:11:09.538824  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.538855  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:09.538885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:09.538979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:09.566651  303437 cri.go:89] found id: ""
	I1210 07:11:09.566692  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.566718  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:09.566732  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:09.566811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:09.591707  303437 cri.go:89] found id: ""
	I1210 07:11:09.591782  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.591798  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:09.591808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:09.591866  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:09.620542  303437 cri.go:89] found id: ""
	I1210 07:11:09.620568  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.620577  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:09.620584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:09.620642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:09.649059  303437 cri.go:89] found id: ""
	I1210 07:11:09.649082  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.649091  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:09.649100  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:09.649111  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:09.674480  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:09.674512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:09.715383  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:09.715410  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:09.775480  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:09.775512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:09.788719  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:09.788798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:09.855981  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:09.848870   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.849294   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.850550   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.851138   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.852720   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:09.848870   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.849294   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.850550   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.851138   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.852720   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
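
Across the whole span (07:10:48 through 07:11:12) not a single probe changes result, so this looks like a control plane that never started rather than one that flapped. In a kubeadm-style minikube node the control-plane containers are static pods launched by kubelet, so kubelet's unit log (already part of the bundle above) is the natural next place to look. A small sketch under that assumption; systemctl availability in the node image is assumed, and the journalctl command is the one from the log:

    # Kubelet drives static-pod startup; surface its recent errors.
    sudo systemctl status kubelet --no-pager || true
    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 40
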
	I1210 07:11:12.356259  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:12.366697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:12.366763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:12.390732  303437 cri.go:89] found id: ""
	I1210 07:11:12.390756  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.390764  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:12.390771  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:12.390826  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:12.430569  303437 cri.go:89] found id: ""
	I1210 07:11:12.430619  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.430631  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:12.430638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:12.430704  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:12.477376  303437 cri.go:89] found id: ""
	I1210 07:11:12.477398  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.477406  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:12.477412  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:12.477483  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:12.503110  303437 cri.go:89] found id: ""
	I1210 07:11:12.503132  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.503140  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:12.503147  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:12.503206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:12.527661  303437 cri.go:89] found id: ""
	I1210 07:11:12.527683  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.527691  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:12.527698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:12.527757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:12.552603  303437 cri.go:89] found id: ""
	I1210 07:11:12.552624  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.552632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:12.552639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:12.552701  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:12.576969  303437 cri.go:89] found id: ""
	I1210 07:11:12.576991  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.576999  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:12.577005  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:12.577074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:12.602537  303437 cri.go:89] found id: ""
	I1210 07:11:12.602559  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.602568  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
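Each polling round above issues the same per-component query: list all CRI containers whose name matches a control-plane component and warn when the result is empty. A compact shell equivalent of one round (component names and crictl flags verbatim from the log; a sketch only, since minikube's actual loop is implemented in Go):

    # One probe round: an empty crictl result corresponds to the
    # 'No container was found matching "<name>"' warnings above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
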
	I1210 07:11:12.602577  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:12.602589  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:12.660382  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:12.660462  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:12.675575  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:12.675600  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:12.748937  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:12.741330   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.741988   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.743656   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.744158   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.745748   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:12.748957  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:12.748970  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:12.773717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:12.773752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
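The "container status" step in the line above uses a small fallback chain: the `which crictl || echo crictl` substitution always yields a token (an absolute path when crictl is on PATH, otherwise the bare name for exec-time lookup), so the command never degenerates to an empty string, and if the crictl listing fails altogether the `||` falls back to docker:

    # Prefer a resolved path to crictl when available; otherwise keep the
    # bare name, and fall back to docker if the crictl listing fails.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
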
	I1210 07:11:15.305384  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:15.315713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:15.315783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:15.340655  303437 cri.go:89] found id: ""
	I1210 07:11:15.340678  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.340687  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:15.340693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:15.340757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:15.366091  303437 cri.go:89] found id: ""
	I1210 07:11:15.366115  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.366123  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:15.366130  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:15.366187  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:15.392837  303437 cri.go:89] found id: ""
	I1210 07:11:15.392862  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.392871  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:15.392877  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:15.392939  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:15.435313  303437 cri.go:89] found id: ""
	I1210 07:11:15.435340  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.435349  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:15.435356  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:15.435422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:15.466475  303437 cri.go:89] found id: ""
	I1210 07:11:15.466500  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.466509  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:15.466516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:15.466575  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:15.497149  303437 cri.go:89] found id: ""
	I1210 07:11:15.497175  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.497184  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:15.497191  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:15.497250  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:15.523660  303437 cri.go:89] found id: ""
	I1210 07:11:15.523725  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.523741  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:15.523748  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:15.523808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:15.547943  303437 cri.go:89] found id: ""
	I1210 07:11:15.547971  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.547987  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:15.547996  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:15.548007  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:15.603029  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:15.603064  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:15.616115  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:15.616150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:15.696616  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:15.686858   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.687579   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689227   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689725   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.693083   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:15.696637  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:15.696660  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:15.728162  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:15.728212  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:18.262884  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:18.273396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:18.273467  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:18.298776  303437 cri.go:89] found id: ""
	I1210 07:11:18.298799  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.298809  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:18.298816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:18.298873  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:18.326358  303437 cri.go:89] found id: ""
	I1210 07:11:18.326431  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.326444  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:18.326472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:18.326567  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:18.351094  303437 cri.go:89] found id: ""
	I1210 07:11:18.351116  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.351125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:18.351132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:18.351190  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:18.376189  303437 cri.go:89] found id: ""
	I1210 07:11:18.376211  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.376220  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:18.376227  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:18.376283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:18.400127  303437 cri.go:89] found id: ""
	I1210 07:11:18.400151  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.400160  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:18.400166  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:18.400231  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:18.429089  303437 cri.go:89] found id: ""
	I1210 07:11:18.429160  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.429173  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:18.429181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:18.429304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:18.462081  303437 cri.go:89] found id: ""
	I1210 07:11:18.462162  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.462174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:18.462202  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:18.462289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:18.490007  303437 cri.go:89] found id: ""
	I1210 07:11:18.490081  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.490105  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:18.490128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:18.490164  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:18.506325  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:18.506400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:18.582081  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:18.572894   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.573949   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.574774   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.576605   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.577188   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:18.582154  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:18.582194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:18.608014  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:18.608047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:18.637797  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:18.637826  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:21.198374  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:21.208690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:21.208757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:21.235678  303437 cri.go:89] found id: ""
	I1210 07:11:21.235701  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.235710  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:21.235723  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:21.235788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:21.259648  303437 cri.go:89] found id: ""
	I1210 07:11:21.259671  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.259679  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:21.259685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:21.259742  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:21.284541  303437 cri.go:89] found id: ""
	I1210 07:11:21.284562  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.284571  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:21.284577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:21.284634  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:21.309347  303437 cri.go:89] found id: ""
	I1210 07:11:21.309371  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.309380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:21.309386  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:21.309449  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:21.337308  303437 cri.go:89] found id: ""
	I1210 07:11:21.337377  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.337397  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:21.337414  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:21.337498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:21.362600  303437 cri.go:89] found id: ""
	I1210 07:11:21.362622  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.362631  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:21.362637  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:21.362706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:21.386909  303437 cri.go:89] found id: ""
	I1210 07:11:21.386934  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.386951  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:21.386959  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:21.387045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:21.444294  303437 cri.go:89] found id: ""
	I1210 07:11:21.444331  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.444340  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:21.444350  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:21.444361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:21.537630  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:21.526461   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.527437   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.531792   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.532470   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.534191   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:21.537650  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:21.537744  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:21.567303  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:21.567339  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:21.599305  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:21.599333  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:21.660956  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:21.660989  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:24.197663  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:24.209532  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:24.209604  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:24.235185  303437 cri.go:89] found id: ""
	I1210 07:11:24.235207  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.235215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:24.235222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:24.235291  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:24.269486  303437 cri.go:89] found id: ""
	I1210 07:11:24.269507  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.269515  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:24.269522  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:24.269580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:24.295987  303437 cri.go:89] found id: ""
	I1210 07:11:24.296010  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.296018  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:24.296024  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:24.296080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:24.321843  303437 cri.go:89] found id: ""
	I1210 07:11:24.321918  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.321932  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:24.321939  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:24.322070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:24.349226  303437 cri.go:89] found id: ""
	I1210 07:11:24.349296  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.349309  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:24.349316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:24.349439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:24.382513  303437 cri.go:89] found id: ""
	I1210 07:11:24.382595  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.382617  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:24.382636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:24.382759  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:24.423211  303437 cri.go:89] found id: ""
	I1210 07:11:24.423284  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.423306  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:24.423325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:24.423413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:24.483751  303437 cri.go:89] found id: ""
	I1210 07:11:24.483774  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.483783  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:24.483792  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:24.483831  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:24.554712  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:24.547245   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.548134   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.549755   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.550089   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.551593   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:24.554746  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:24.554759  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:24.583135  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:24.583172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:24.621794  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:24.621824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:24.686891  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:24.686927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:27.212817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:27.223470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:27.223540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:27.250394  303437 cri.go:89] found id: ""
	I1210 07:11:27.250421  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.250431  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:27.250437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:27.250497  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:27.275076  303437 cri.go:89] found id: ""
	I1210 07:11:27.275099  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.275108  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:27.275114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:27.275175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:27.300285  303437 cri.go:89] found id: ""
	I1210 07:11:27.300311  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.300321  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:27.300327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:27.300389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:27.324870  303437 cri.go:89] found id: ""
	I1210 07:11:27.324894  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.324904  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:27.324910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:27.324976  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:27.351041  303437 cri.go:89] found id: ""
	I1210 07:11:27.351063  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.351072  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:27.351079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:27.351145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:27.375920  303437 cri.go:89] found id: ""
	I1210 07:11:27.375942  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.375950  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:27.375957  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:27.376016  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:27.400149  303437 cri.go:89] found id: ""
	I1210 07:11:27.400174  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.400183  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:27.400190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:27.400248  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:27.436160  303437 cri.go:89] found id: ""
	I1210 07:11:27.436192  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.436201  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:27.436211  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:27.436222  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:27.498671  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:27.498704  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:27.512854  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:27.512880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:27.582038  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:27.573895   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.574782   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576306   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576889   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.578645   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:27.582102  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:27.582129  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:27.610246  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:27.610287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:30.139493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:30.150290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:30.150358  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:30.176970  303437 cri.go:89] found id: ""
	I1210 07:11:30.177000  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.177008  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:30.177015  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:30.177079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:30.202200  303437 cri.go:89] found id: ""
	I1210 07:11:30.202226  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.202235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:30.202241  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:30.202300  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:30.226724  303437 cri.go:89] found id: ""
	I1210 07:11:30.226748  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.226757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:30.226763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:30.226825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:30.251813  303437 cri.go:89] found id: ""
	I1210 07:11:30.251835  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.251844  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:30.251850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:30.251912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:30.277078  303437 cri.go:89] found id: ""
	I1210 07:11:30.277099  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.277109  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:30.277115  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:30.277172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:30.305998  303437 cri.go:89] found id: ""
	I1210 07:11:30.306019  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.306027  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:30.306034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:30.306091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:30.334810  303437 cri.go:89] found id: ""
	I1210 07:11:30.334831  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.334839  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:30.334846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:30.334903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:30.359892  303437 cri.go:89] found id: ""
	I1210 07:11:30.359913  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.359921  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:30.359930  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:30.359940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:30.385054  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:30.385088  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:30.421360  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:30.421390  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:30.485019  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:30.485051  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:30.498844  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:30.498916  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:30.560538  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:30.552697   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.553538   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.554465   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.555942   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.556285   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:33.062385  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:33.073083  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:33.073165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:33.097439  303437 cri.go:89] found id: ""
	I1210 07:11:33.097463  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.097471  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:33.097478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:33.097540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:33.124732  303437 cri.go:89] found id: ""
	I1210 07:11:33.124754  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.124763  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:33.124769  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:33.124829  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:33.153513  303437 cri.go:89] found id: ""
	I1210 07:11:33.153536  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.153545  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:33.153550  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:33.153610  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:33.179491  303437 cri.go:89] found id: ""
	I1210 07:11:33.179518  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.179526  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:33.179533  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:33.179593  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:33.205039  303437 cri.go:89] found id: ""
	I1210 07:11:33.205232  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.205248  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:33.205255  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:33.205332  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:33.231637  303437 cri.go:89] found id: ""
	I1210 07:11:33.231661  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.231670  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:33.231677  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:33.231740  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:33.257596  303437 cri.go:89] found id: ""
	I1210 07:11:33.257622  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.257630  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:33.257636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:33.257702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:33.283943  303437 cri.go:89] found id: ""
	I1210 07:11:33.283968  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.283978  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:33.283989  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:33.284003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:33.297130  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:33.297162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:33.358971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:33.351484   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.351972   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.353499   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.354087   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.355603   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
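The "connection refused" stderr above is kubectl's discovery client retrying its request for the server API group list; every attempt dials port 8443 on the node's loopback address, where no apiserver is listening yet. The same symptom can be probed with a standalone Go sketch (illustrative only):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Mirrors the failing dial in the stderr above: tcp [::1]:8443.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // "connection refused" while it is down
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8443")
    }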
	I1210 07:11:33.359004  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:33.359053  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:33.383559  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:33.383593  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:33.411160  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:33.411184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:35.975172  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:35.985598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:35.985677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:36.012649  303437 cri.go:89] found id: ""
	I1210 07:11:36.012687  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.012698  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:36.012705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:36.012772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:36.039233  303437 cri.go:89] found id: ""
	I1210 07:11:36.039301  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.039325  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:36.039344  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:36.039440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:36.064743  303437 cri.go:89] found id: ""
	I1210 07:11:36.064766  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.064775  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:36.064781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:36.064839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:36.088939  303437 cri.go:89] found id: ""
	I1210 07:11:36.088961  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.088969  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:36.088975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:36.089037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:36.116797  303437 cri.go:89] found id: ""
	I1210 07:11:36.116821  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.116830  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:36.116836  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:36.116894  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:36.141419  303437 cri.go:89] found id: ""
	I1210 07:11:36.141447  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.141456  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:36.141463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:36.141525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:36.166138  303437 cri.go:89] found id: ""
	I1210 07:11:36.166165  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.166174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:36.166180  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:36.166242  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:36.193939  303437 cri.go:89] found id: ""
	I1210 07:11:36.194014  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.194036  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:36.194058  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:36.194096  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:36.250476  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:36.250507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:36.263989  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:36.264070  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:36.328452  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:36.320900   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.321474   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323175   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323732   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.325316   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:36.328474  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:36.328487  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:36.353490  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:36.353523  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
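The "container status" step shells out with a fallback: use crictl if it resolves on PATH, otherwise fall back to the docker CLI. The same one-liner can be driven from Go roughly as follows (a sketch assuming a bash shell on the node, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus runs the exact command seen in the log: prefer
    // crictl ps -a, falling back to docker ps -a if crictl is unavailable.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(out, err)
    }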
	I1210 07:11:38.890866  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:38.901365  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:38.901464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:38.932423  303437 cri.go:89] found id: ""
	I1210 07:11:38.932450  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.932458  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:38.932465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:38.932525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:38.959879  303437 cri.go:89] found id: ""
	I1210 07:11:38.959907  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.959915  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:38.959921  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:38.959978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:38.986312  303437 cri.go:89] found id: ""
	I1210 07:11:38.986338  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.986347  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:38.986353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:38.986410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:39.011808  303437 cri.go:89] found id: ""
	I1210 07:11:39.011830  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.011839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:39.011845  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:39.011908  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:39.037634  303437 cri.go:89] found id: ""
	I1210 07:11:39.037675  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.037685  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:39.037691  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:39.037763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:39.062989  303437 cri.go:89] found id: ""
	I1210 07:11:39.063073  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.063096  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:39.063114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:39.063200  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:39.092710  303437 cri.go:89] found id: ""
	I1210 07:11:39.092732  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.092740  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:39.092749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:39.092809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:39.116692  303437 cri.go:89] found id: ""
	I1210 07:11:39.116715  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.116724  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:39.116735  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:39.116745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:39.173134  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:39.173165  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:39.187543  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:39.187619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:39.248942  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:39.241567   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.242335   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.243953   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.244270   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.245769   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:39.248964  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:39.248976  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:39.273536  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:39.273572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
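From the timestamps, the whole sequence repeats roughly every three seconds: probe for a kube-apiserver process with pgrep, enumerate the expected control-plane containers with crictl, and re-gather kubelet, dmesg, describe-nodes, containerd, and container-status logs when nothing is found. In outline the wait loop looks like the following (an assumed shape inferred from the log, with made-up names and a guessed interval):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the logged probe:
    // sudo pgrep -xnf kube-apiserver.*minikube.*
    // pgrep exits 0 only when a matching process exists.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        for !apiserverRunning() {
            fmt.Println("kube-apiserver not up; gathering diagnostics ...")
            time.Sleep(3 * time.Second) // cadence observed between the cycles above
        }
    }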
	I1210 07:11:41.801091  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:41.812394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:41.812473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:41.838936  303437 cri.go:89] found id: ""
	I1210 07:11:41.839028  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.839042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:41.839050  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:41.839131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:41.864566  303437 cri.go:89] found id: ""
	I1210 07:11:41.864593  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.864603  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:41.864609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:41.864673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:41.889296  303437 cri.go:89] found id: ""
	I1210 07:11:41.889321  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.889330  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:41.889337  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:41.889396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:41.915562  303437 cri.go:89] found id: ""
	I1210 07:11:41.915589  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.915601  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:41.915608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:41.915670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:41.953369  303437 cri.go:89] found id: ""
	I1210 07:11:41.953395  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.953404  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:41.953410  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:41.953473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:41.985179  303437 cri.go:89] found id: ""
	I1210 07:11:41.985205  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.985216  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:41.985223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:41.985327  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:42.015327  303437 cri.go:89] found id: ""
	I1210 07:11:42.015400  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.015424  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:42.015443  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:42.015541  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:42.043382  303437 cri.go:89] found id: ""
	I1210 07:11:42.043407  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.043421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:42.043431  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:42.043443  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:42.080163  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:42.080196  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:42.139896  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:42.139935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:42.156701  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:42.156737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:42.234579  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:42.225807   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.226427   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228068   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228448   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.229958   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:42.234662  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:42.234691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:44.763362  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:44.773978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:44.774048  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:44.799637  303437 cri.go:89] found id: ""
	I1210 07:11:44.799665  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.799674  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:44.799680  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:44.799741  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:44.827772  303437 cri.go:89] found id: ""
	I1210 07:11:44.827797  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.827806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:44.827812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:44.827871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:44.851977  303437 cri.go:89] found id: ""
	I1210 07:11:44.852005  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.852014  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:44.852020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:44.852080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:44.876554  303437 cri.go:89] found id: ""
	I1210 07:11:44.876580  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.876590  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:44.876596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:44.876658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:44.903100  303437 cri.go:89] found id: ""
	I1210 07:11:44.903132  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.903141  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:44.903154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:44.903215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:44.933312  303437 cri.go:89] found id: ""
	I1210 07:11:44.933333  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.933342  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:44.933348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:44.933407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:44.969458  303437 cri.go:89] found id: ""
	I1210 07:11:44.969530  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.969552  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:44.969569  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:44.969666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:45.013288  303437 cri.go:89] found id: ""
	I1210 07:11:45.013381  303437 logs.go:282] 0 containers: []
	W1210 07:11:45.013403  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:45.013427  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:45.013468  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:45.111594  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:45.112597  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:45.131602  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:45.131636  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:45.220807  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:45.205512   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.206557   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.208854   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.209321   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.215241   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:45.220830  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:45.220843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:45.257708  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:45.257752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:47.792395  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:47.802865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:47.802937  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:47.832152  303437 cri.go:89] found id: ""
	I1210 07:11:47.832175  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.832191  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:47.832198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:47.832262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:47.856843  303437 cri.go:89] found id: ""
	I1210 07:11:47.856868  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.856877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:47.856883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:47.856943  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:47.880564  303437 cri.go:89] found id: ""
	I1210 07:11:47.880586  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.880595  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:47.880601  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:47.880658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:47.908243  303437 cri.go:89] found id: ""
	I1210 07:11:47.908264  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.908273  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:47.908280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:47.908337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:47.951940  303437 cri.go:89] found id: ""
	I1210 07:11:47.951961  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.951969  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:47.951975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:47.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:47.986418  303437 cri.go:89] found id: ""
	I1210 07:11:47.986437  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.986446  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:47.986452  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:47.986511  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:48.018032  303437 cri.go:89] found id: ""
	I1210 07:11:48.018055  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.018064  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:48.018069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:48.018131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:48.045010  303437 cri.go:89] found id: ""
	I1210 07:11:48.045033  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.045043  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:48.045052  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:48.045063  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:48.070773  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:48.070806  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:48.100419  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:48.100451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:48.157253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:48.157287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:48.171891  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:48.171922  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:48.236843  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:48.228588   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.229356   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231115   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231743   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.233451   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:50.738489  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:50.749165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:50.749232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:50.774993  303437 cri.go:89] found id: ""
	I1210 07:11:50.775032  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.775042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:50.775049  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:50.775108  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:50.800355  303437 cri.go:89] found id: ""
	I1210 07:11:50.800380  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.800389  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:50.800396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:50.800455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:50.825116  303437 cri.go:89] found id: ""
	I1210 07:11:50.825139  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.825148  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:50.825154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:50.825216  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:50.852419  303437 cri.go:89] found id: ""
	I1210 07:11:50.852441  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.852449  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:50.852455  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:50.852513  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:50.877502  303437 cri.go:89] found id: ""
	I1210 07:11:50.877522  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.877531  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:50.877537  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:50.877594  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:50.905139  303437 cri.go:89] found id: ""
	I1210 07:11:50.905161  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.905171  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:50.905177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:50.905237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:50.933267  303437 cri.go:89] found id: ""
	I1210 07:11:50.933291  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.933299  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:50.933305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:50.933364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:50.961246  303437 cri.go:89] found id: ""
	I1210 07:11:50.961267  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.961276  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:50.961285  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:50.961296  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:50.989123  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:50.989149  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:51.046128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:51.046168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:51.060977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:51.061014  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:51.126917  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:51.119326   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.119907   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.121515   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.122000   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.123466   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:51.126938  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:51.126951  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:53.652260  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:53.662761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:53.662827  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:53.692655  303437 cri.go:89] found id: ""
	I1210 07:11:53.692728  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.692755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:53.692773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:53.692852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:53.726710  303437 cri.go:89] found id: ""
	I1210 07:11:53.726743  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.726752  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:53.726758  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:53.726816  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:53.751772  303437 cri.go:89] found id: ""
	I1210 07:11:53.751793  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.751802  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:53.751808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:53.751867  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:53.776281  303437 cri.go:89] found id: ""
	I1210 07:11:53.776347  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.776371  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:53.776391  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:53.776475  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:53.801234  303437 cri.go:89] found id: ""
	I1210 07:11:53.801259  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.801268  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:53.801275  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:53.801330  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:53.830240  303437 cri.go:89] found id: ""
	I1210 07:11:53.830265  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.830273  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:53.830280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:53.830341  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:53.855035  303437 cri.go:89] found id: ""
	I1210 07:11:53.855059  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.855069  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:53.855075  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:53.855140  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:53.883359  303437 cri.go:89] found id: ""
	I1210 07:11:53.883384  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.883401  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:53.883411  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:53.883423  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:53.923136  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:53.923215  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:53.985138  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:53.985172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:53.999740  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:53.999775  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:54.066156  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:54.058409   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.059038   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.060575   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.061111   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.062782   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:54.066181  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:54.066194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:56.591475  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:56.601960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:56.602033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:56.626286  303437 cri.go:89] found id: ""
	I1210 07:11:56.626311  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.626320  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:56.626327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:56.626385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:56.650098  303437 cri.go:89] found id: ""
	I1210 07:11:56.650124  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.650133  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:56.650139  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:56.650201  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:56.677542  303437 cri.go:89] found id: ""
	I1210 07:11:56.677569  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.677578  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:56.677584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:56.677659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:56.709405  303437 cri.go:89] found id: ""
	I1210 07:11:56.709430  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.709439  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:56.709446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:56.709508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:56.739179  303437 cri.go:89] found id: ""
	I1210 07:11:56.739204  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.739212  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:56.739219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:56.739277  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:56.766584  303437 cri.go:89] found id: ""
	I1210 07:11:56.766609  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.766618  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:56.766624  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:56.766691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:56.791703  303437 cri.go:89] found id: ""
	I1210 07:11:56.791729  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.791739  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:56.791745  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:56.791809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:56.817298  303437 cri.go:89] found id: ""
	I1210 07:11:56.817325  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.817334  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:56.817344  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:56.817355  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:56.875173  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:56.875210  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:56.889120  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:56.889146  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:56.984238  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:56.984258  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:56.984270  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:57.011593  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:57.011627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:59.548660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:59.559203  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:59.559272  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:59.584024  303437 cri.go:89] found id: ""
	I1210 07:11:59.584091  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.584113  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:59.584131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:59.584223  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:59.609283  303437 cri.go:89] found id: ""
	I1210 07:11:59.609307  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.609316  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:59.609325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:59.609385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:59.633912  303437 cri.go:89] found id: ""
	I1210 07:11:59.633935  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.633944  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:59.633951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:59.634012  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:59.660339  303437 cri.go:89] found id: ""
	I1210 07:11:59.660365  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.660373  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:59.660380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:59.660437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:59.697302  303437 cri.go:89] found id: ""
	I1210 07:11:59.697329  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.697342  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:59.697348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:59.697410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:59.733379  303437 cri.go:89] found id: ""
	I1210 07:11:59.733402  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.733411  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:59.733418  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:59.733488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:59.758324  303437 cri.go:89] found id: ""
	I1210 07:11:59.758350  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.758360  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:59.758366  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:59.758423  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:59.788265  303437 cri.go:89] found id: ""
	I1210 07:11:59.788304  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.788313  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:59.788323  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:59.788335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:59.816310  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:59.816335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:59.875191  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:59.875227  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:59.888706  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:59.888737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:59.964581  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:59.964604  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:59.964617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:02.490529  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:02.501579  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:02.501655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:02.530852  303437 cri.go:89] found id: ""
	I1210 07:12:02.530876  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.530885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:02.530894  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:02.530955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:02.561336  303437 cri.go:89] found id: ""
	I1210 07:12:02.561361  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.561370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:02.561377  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:02.561434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:02.585933  303437 cri.go:89] found id: ""
	I1210 07:12:02.585963  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.585972  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:02.585979  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:02.586040  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:02.611097  303437 cri.go:89] found id: ""
	I1210 07:12:02.611122  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.611131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:02.611137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:02.611199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:02.637900  303437 cri.go:89] found id: ""
	I1210 07:12:02.637925  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.637934  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:02.637941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:02.638002  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:02.669431  303437 cri.go:89] found id: ""
	I1210 07:12:02.669457  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.669467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:02.669474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:02.669536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:02.704940  303437 cri.go:89] found id: ""
	I1210 07:12:02.704967  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.704976  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:02.704983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:02.705044  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:02.733218  303437 cri.go:89] found id: ""
	I1210 07:12:02.733241  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.733251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:02.733260  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:02.733271  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:02.791544  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:02.791580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:02.805689  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:02.805716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:02.873516  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:02.873536  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:02.873548  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:02.898899  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:02.898932  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.445135  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:05.455827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:05.455898  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:05.481329  303437 cri.go:89] found id: ""
	I1210 07:12:05.481352  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.481363  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:05.481370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:05.481428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:05.507339  303437 cri.go:89] found id: ""
	I1210 07:12:05.507362  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.507371  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:05.507378  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:05.507444  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:05.531971  303437 cri.go:89] found id: ""
	I1210 07:12:05.531995  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.532004  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:05.532010  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:05.532074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:05.563046  303437 cri.go:89] found id: ""
	I1210 07:12:05.563069  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.563078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:05.563084  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:05.563147  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:05.587778  303437 cri.go:89] found id: ""
	I1210 07:12:05.587801  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.587810  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:05.587816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:05.587874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:05.611952  303437 cri.go:89] found id: ""
	I1210 07:12:05.611973  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.611982  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:05.611988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:05.612047  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:05.636683  303437 cri.go:89] found id: ""
	I1210 07:12:05.636705  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.636715  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:05.636721  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:05.636781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:05.674580  303437 cri.go:89] found id: ""
	I1210 07:12:05.674609  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.674619  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:05.674628  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:05.674640  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:05.690150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:05.690176  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:05.761058  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:05.761078  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:05.761090  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:05.786479  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:05.786515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.814400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:05.814426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.372748  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:08.382940  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:08.383032  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:08.406822  303437 cri.go:89] found id: ""
	I1210 07:12:08.406851  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.406860  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:08.406867  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:08.406931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:08.431746  303437 cri.go:89] found id: ""
	I1210 07:12:08.431775  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.431786  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:08.431795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:08.431857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:08.456129  303437 cri.go:89] found id: ""
	I1210 07:12:08.456152  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.456161  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:08.456167  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:08.456226  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:08.481945  303437 cri.go:89] found id: ""
	I1210 07:12:08.481981  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.481990  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:08.481997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:08.482070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:08.511057  303437 cri.go:89] found id: ""
	I1210 07:12:08.511080  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.511089  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:08.511095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:08.511165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:08.537072  303437 cri.go:89] found id: ""
	I1210 07:12:08.537094  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.537106  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:08.537113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:08.537188  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:08.562930  303437 cri.go:89] found id: ""
	I1210 07:12:08.562961  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.562970  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:08.562992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:08.563116  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:08.587421  303437 cri.go:89] found id: ""
	I1210 07:12:08.587446  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.587455  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:08.587464  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:08.587501  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.646970  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:08.647003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:08.661398  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:08.661426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:08.746222  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:08.746254  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:08.746267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:08.772476  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:08.772510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:11.303459  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:11.315726  303437 out.go:203] 
	W1210 07:12:11.316890  303437 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:12:11.316924  303437 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:12:11.316933  303437 out.go:285] * Related issues:
	W1210 07:12:11.316946  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:12:11.316957  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:12:11.318146  303437 out.go:203] 
	
	
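The K8S_APISERVER_MISSING exit above means minikube's probe loop, repeated roughly every three seconds in the log, never found an apiserver process or container. The probe can be reproduced by hand against the node container; a minimal sketch using only commands that already appear in this log (the container name newest-cni-168808 is taken from this run):

    # Same two checks minikube runs inside the node; both come back empty here.
    docker exec newest-cni-168808 sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # exit 1: no such process
    docker exec newest-cni-168808 sudo crictl ps -a --quiet --name=kube-apiserver # no output: no container

Both probes failing for the full six minutes is what trips the wait-for-apiserver timeout.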
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229542174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229558412Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229590757Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229604525Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229613715Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229623348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229633022Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229642441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229657818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229687390Z" level=info msg="Connect containerd service"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229958744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.230529901Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250111138Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250206229Z" level=info msg="Start recovering state"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250507327Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.251405174Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273418724Z" level=info msg="Start event monitor"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273477383Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273488378Z" level=info msg="Start streaming server"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273499069Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273508768Z" level=info msg="runtime interface starting up..."
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273515496Z" level=info msg="starting plugins..."
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273546668Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273837124Z" level=info msg="containerd successfully booted in 0.065786s"
	Dec 10 07:06:07 newest-cni-168808 systemd[1]: Started containerd.service - containerd container runtime.
	
	
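One containerd line above is worth flagging: "failed to load cni during init ... no network config found in /etc/cni/net.d". A quick way to inspect that directory (a sketch, reusing the node name from this run):

    # List CNI configs inside the node; empty output matches the containerd warning.
    docker exec newest-cni-168808 ls -A /etc/cni/net.d

Containerd still reports "successfully booted" a few lines later, so this warning is not what blocks the start; the apiserver never coming up is.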
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:21.219467   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:21.220100   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:21.221695   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:21.222003   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:21.223671   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:12:21 up  1:54,  0 user,  load average: 0.44, 0.50, 1.05
	Linux newest-cni-168808 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:12:16 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:16 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:17 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:18 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:18 newest-cni-168808 kubelet[13669]: E1210 07:12:18.826565   13669 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:18 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:18 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:19 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 10 07:12:19 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:19 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:19 newest-cni-168808 kubelet[13715]: E1210 07:12:19.725769   13715 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:19 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:19 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:20 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 10 07:12:20 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:20 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:20 newest-cni-168808 kubelet[13726]: E1210 07:12:20.468830   13726 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:20 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:20 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:21 newest-cni-168808 kubelet[13821]: E1210 07:12:21.231241   13821 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
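The kubelet section of the dump points at the actual root cause: kubelet v1.35.0-rc.1 exits with "kubelet is configured to not run on a host using cgroup v1", and systemd restarts it in a crash loop, so no static pods (including kube-apiserver) are ever created. A quick host-side check for which cgroup hierarchy is in use (a sketch; run on the Jenkins agent, both commands are standard):

    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1, the failing case here
    stat -fc %T /sys/fs/cgroup
    # Docker exposes the same information
    docker info --format '{{.CgroupVersion}}'

The kernel line above (5.15.0-1084-aws) supports cgroup v2, so this looks like a boot-configuration matter (e.g. systemd.unified_cgroup_hierarchy) rather than a kernel limitation.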
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (352.268108ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-168808" apiserver is not running, skipping kubectl commands (state="Stopped")
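The status check above uses minikube's Go-template output, where {{.APIServer}} is one field of the status struct. The same invocation can be widened to show the related components; a sketch using the binary and profile from this run (the Host and Kubelet field names are assumed from minikube's status template, not shown elsewhere in this log):

    # Prints e.g. "Running Stopped Stopped" when only the node container is up
    out/minikube-linux-arm64 status -p newest-cni-168808 \
      --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'

A non-zero exit is expected while components are stopped, which is why the helper notes "(may be ok)".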
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-168808
helpers_test.go:244: (dbg) docker inspect newest-cni-168808:

-- stdout --
	[
	    {
	        "Id": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	        "Created": "2025-12-10T06:55:56.205654512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:06:01.504514541Z",
	            "FinishedAt": "2025-12-10T07:05:59.862084086Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/hosts",
	        "LogPath": "/var/lib/docker/containers/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3/7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3-json.log",
	        "Name": "/newest-cni-168808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-168808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-168808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d1db3aa80a5128ed11ce07ba2f73640b40d5d1640b1632ed997aefc39309cf3",
	                "LowerDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb81f1d0765f9a39103deeed1e48fc3f87043d0c4c7aa4af45512756cff3762c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-168808",
	                "Source": "/var/lib/docker/volumes/newest-cni-168808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-168808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-168808",
	                "name.minikube.sigs.k8s.io": "newest-cni-168808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "515b233ea68ef1c9ed300584d10d72421aa77f4775a69279a293bdf725b2e113",
	            "SandboxKey": "/var/run/docker/netns/515b233ea68e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-168808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e3:f7:16:bb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fedd4ad26097ebf6757101ef8e22a141acd4ba740aa95d5f1eab7ffc232007f5",
	                    "EndpointID": "058f1c535f16248f59aad5f1fc5aceccd4ce55e84235161b803daa93fdc8a70f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-168808",
	                        "7d1db3aa80a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
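The record above is the full docker inspect dump for the paused node container; for the post-mortem the interesting part is the port map, which shows all five endpoints still bound to 127.0.0.1. The same fields can be pulled out directly with a Go template (a minimal sketch against the same profile, not part of the harness output):

	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-168808
	# or just the SSH endpoint, using the same template the harness runs below:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-168808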
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (319.230952ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
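Exit status 2 with a Host state of Running is consistent with a cluster whose components were just paused: the host is up but the aggregate status is non-zero, which is why the helper treats it as possibly benign. The check can be reproduced by hand (sketch, same binary and profile as above):

	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808
	echo "exit: $?"    # 2 here, matching the run above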
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-168808 logs -n 25: (1.553614696s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p embed-certs-451123                                                                                                                                                                                                                                    │ embed-certs-451123           │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ delete  │ -p disable-driver-mounts-595993                                                                                                                                                                                                                          │ disable-driver-mounts-595993 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:53 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ stop    │ -p default-k8s-diff-port-395269 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:54 UTC │
	│ start   │ -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:54 UTC │ 10 Dec 25 06:55 UTC │
	│ image   │ default-k8s-diff-port-395269 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ pause   │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ unpause │ -p default-k8s-diff-port-395269 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ delete  │ -p default-k8s-diff-port-395269                                                                                                                                                                                                                          │ default-k8s-diff-port-395269 │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │ 10 Dec 25 06:55 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 06:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-320236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 06:58 UTC │                     │
	│ stop    │ -p no-preload-320236 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ addons  │ enable dashboard -p no-preload-320236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │ 10 Dec 25 07:00 UTC │
	│ start   │ -p no-preload-320236 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-320236            │ jenkins │ v1.37.0 │ 10 Dec 25 07:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-168808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:04 UTC │                     │
	│ stop    │ -p newest-cni-168808 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:05 UTC │ 10 Dec 25 07:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-168808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │ 10 Dec 25 07:06 UTC │
	│ start   │ -p newest-cni-168808 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:06 UTC │                     │
	│ image   │ newest-cni-168808 image list --format=json                                                                                                                                                                                                               │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:12 UTC │ 10 Dec 25 07:12 UTC │
	│ pause   │ -p newest-cni-168808 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:12 UTC │ 10 Dec 25 07:12 UTC │
	│ unpause │ -p newest-cni-168808 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-168808            │ jenkins │ v1.37.0 │ 10 Dec 25 07:12 UTC │ 10 Dec 25 07:12 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:06:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:06:00.999721  303437 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:06:00.999928  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:00.999941  303437 out.go:374] Setting ErrFile to fd 2...
	I1210 07:06:00.999948  303437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:06:01.000291  303437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:06:01.000840  303437 out.go:368] Setting JSON to false
	I1210 07:06:01.001958  303437 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6511,"bootTime":1765343850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:06:01.002049  303437 start.go:143] virtualization:  
	I1210 07:06:01.005229  303437 out.go:179] * [newest-cni-168808] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:06:01.009127  303437 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:06:01.009191  303437 notify.go:221] Checking for updates...
	I1210 07:06:01.015115  303437 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:06:01.018047  303437 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:01.021396  303437 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:06:01.024347  303437 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:06:01.027298  303437 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:06:01.030670  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:01.031359  303437 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:06:01.059280  303437 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:06:01.059409  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.117784  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.1083965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.117913  303437 docker.go:319] overlay module found
	I1210 07:06:01.121244  303437 out.go:179] * Using the docker driver based on existing profile
	I1210 07:06:01.124129  303437 start.go:309] selected driver: docker
	I1210 07:06:01.124152  303437 start.go:927] validating driver "docker" against &{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.124257  303437 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:06:01.124971  303437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:06:01.177684  303437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:06:01.168448125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:06:01.178039  303437 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:06:01.178072  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:01.178124  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:01.178165  303437 start.go:353] cluster config:
	{Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:01.183109  303437 out.go:179] * Starting "newest-cni-168808" primary control-plane node in "newest-cni-168808" cluster
	I1210 07:06:01.185906  303437 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:06:01.188882  303437 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:06:01.191653  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:01.191725  303437 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:06:01.211624  303437 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:06:01.211647  303437 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:06:01.245655  303437 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 07:06:01.410333  303437 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
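Both preload locations return 404 for this version/runtime/arch combination, consistent with no preloaded tarball having been published for the v1.35.0-rc.1 release candidate; minikube therefore falls back to its per-image cache, as the cache.go lines further down show. The availability check is easy to reproduce by hand (sketch, using the same URL the log reports):

	curl -s -o /dev/null -w '%{http_code}\n' \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	# prints 404 when no preload exists for this version/runtime/arch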
	I1210 07:06:01.410482  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.410710  303437 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:06:01.410741  303437 start.go:360] acquireMachinesLock for newest-cni-168808: {Name:mk3e9e7ddd89f37944cb01c45725514e76e5ba82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:01.410794  303437 start.go:364] duration metric: took 32.001µs to acquireMachinesLock for "newest-cni-168808"
	I1210 07:06:01.410811  303437 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:06:01.410817  303437 fix.go:54] fixHost starting: 
	I1210 07:06:01.411108  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.411381  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
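The binary.go lines (here and repeated below) indicate that minikube is not persisting the kubeadm binary to its local cache for this RC; it downloads straight from dl.k8s.io and verifies against the published checksum file. The checksum itself is fetchable (sketch, same URL as in the log):

	curl -sL https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256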
	I1210 07:06:01.445269  303437 fix.go:112] recreateIfNeeded on newest-cni-168808: state=Stopped err=<nil>
	W1210 07:06:01.445299  303437 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 07:05:57.413623  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:05:59.413665  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:01.448589  303437 out.go:252] * Restarting existing docker container for "newest-cni-168808" ...
	I1210 07:06:01.448678  303437 cli_runner.go:164] Run: docker start newest-cni-168808
	I1210 07:06:01.609744  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.770299  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:01.790186  303437 kic.go:430] container "newest-cni-168808" state is running.
	I1210 07:06:01.790574  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:01.816467  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:01.816783  303437 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/config.json ...
	I1210 07:06:01.816990  303437 machine.go:94] provisionDockerMachine start ...
	I1210 07:06:01.817053  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:01.864829  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:01.865171  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:01.865181  303437 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:06:01.865918  303437 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:06:02.031349  303437 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031449  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:06:02.031458  303437 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.682µs
	I1210 07:06:02.031466  303437 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:06:02.031488  303437 cache.go:107] acquiring lock: {Name:mk1a4f41c955f4e3437c2cb5db1509c33f5c30eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031520  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:06:02.031525  303437 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 49.765µs
	I1210 07:06:02.031536  303437 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031546  303437 cache.go:107] acquiring lock: {Name:mk00b6417057e6ee2f4fd49898b631e3add2a30a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031572  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:06:02.031577  303437 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32µs
	I1210 07:06:02.031583  303437 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031592  303437 cache.go:107] acquiring lock: {Name:mk37e12e3648f325b992e6c7b8dad857a3b77f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031616  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:06:02.031621  303437 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 30.351µs
	I1210 07:06:02.031626  303437 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031635  303437 cache.go:107] acquiring lock: {Name:mk82ac8f1ee255db98f92509c725ff0e0ce7bb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031658  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1210 07:06:02.031663  303437 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 29.047µs
	I1210 07:06:02.031668  303437 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:06:02.031676  303437 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031702  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:06:02.031711  303437 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.042µs
	I1210 07:06:02.031716  303437 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:06:02.031725  303437 cache.go:107] acquiring lock: {Name:mkfb58a7b847ded5f09e2094f5807a55f6cb50f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031752  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1210 07:06:02.031757  303437 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 32.509µs
	I1210 07:06:02.031762  303437 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1210 07:06:02.031770  303437 cache.go:107] acquiring lock: {Name:mk5cf56486d568323bbb8601591699fb489463db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:06:02.031794  303437 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1210 07:06:02.031799  303437 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.973µs
	I1210 07:06:02.031809  303437 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1210 07:06:02.031817  303437 cache.go:87] Successfully saved all images to host disk.
	I1210 07:06:05.019038  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.019065  303437 ubuntu.go:182] provisioning hostname "newest-cni-168808"
	I1210 07:06:05.019142  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.038167  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.038497  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.038514  303437 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-168808 && echo "newest-cni-168808" | sudo tee /etc/hostname
	I1210 07:06:05.212495  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-168808
	
	I1210 07:06:05.212574  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.236676  303437 main.go:143] libmachine: Using SSH client type: native
	I1210 07:06:05.236997  303437 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1210 07:06:05.237020  303437 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-168808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-168808/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-168808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:06:05.387591  303437 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:06:05.387661  303437 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:06:05.387701  303437 ubuntu.go:190] setting up certificates
	I1210 07:06:05.387718  303437 provision.go:84] configureAuth start
	I1210 07:06:05.387781  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.406720  303437 provision.go:143] copyHostCerts
	I1210 07:06:05.406812  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:06:05.406827  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:06:05.406903  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:06:05.407068  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:06:05.407080  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:06:05.407115  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:06:05.409257  303437 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:06:05.409288  303437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:06:05.409367  303437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:06:05.409470  303437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.newest-cni-168808 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-168808]
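provision.go regenerates the machine server certificate with the SANs listed above (loopback, the container IP 192.168.76.2, and the profile hostnames). If needed, the SAN list can be confirmed from the file named in the log (sketch, assuming openssl on the host):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'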
	I1210 07:06:05.457283  303437 provision.go:177] copyRemoteCerts
	I1210 07:06:05.457369  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:06:05.457416  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.474754  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.578879  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:06:05.596686  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:06:05.614316  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:06:05.632529  303437 provision.go:87] duration metric: took 244.787433ms to configureAuth
	I1210 07:06:05.632557  303437 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:06:05.632770  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:05.632780  303437 machine.go:97] duration metric: took 3.815782677s to provisionDockerMachine
	I1210 07:06:05.632794  303437 start.go:293] postStartSetup for "newest-cni-168808" (driver="docker")
	I1210 07:06:05.632814  303437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:06:05.632866  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:06:05.632909  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.651511  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.755084  303437 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:06:05.758541  303437 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:06:05.758569  303437 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:06:05.758581  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:06:05.758636  303437 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:06:05.758716  303437 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:06:05.758818  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:06:05.766638  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:05.784153  303437 start.go:296] duration metric: took 151.337167ms for postStartSetup
	I1210 07:06:05.784245  303437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:06:05.784296  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.801680  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.903956  303437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:06:05.910414  303437 fix.go:56] duration metric: took 4.499590898s for fixHost
	I1210 07:06:05.910487  303437 start.go:83] releasing machines lock for "newest-cni-168808", held for 4.499684126s
	I1210 07:06:05.910597  303437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-168808
	I1210 07:06:05.931294  303437 ssh_runner.go:195] Run: cat /version.json
	I1210 07:06:05.931352  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.933029  303437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:06:05.933104  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:05.966773  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:05.968660  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	W1210 07:06:01.914114  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:04.412714  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:06.413234  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:06.164421  303437 ssh_runner.go:195] Run: systemctl --version
	I1210 07:06:06.170684  303437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:06:06.174920  303437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:06:06.174984  303437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:06:06.182557  303437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:06:06.182578  303437 start.go:496] detecting cgroup driver to use...
	I1210 07:06:06.182611  303437 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:06:06.182660  303437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:06:06.200334  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:06:06.213740  303437 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:06:06.213811  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:06:06.229308  303437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:06:06.242262  303437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:06:06.362603  303437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:06:06.483045  303437 docker.go:234] disabling docker service ...
	I1210 07:06:06.483112  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:06:06.498250  303437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:06:06.511747  303437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:06:06.628460  303437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:06:06.766872  303437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:06:06.779978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:06:06.794352  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:06.943808  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:06:06.954116  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:06:06.962677  303437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:06:06.962740  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:06:06.971255  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:06.980030  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:06:06.988476  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:06:07.007850  303437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:06:07.016475  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:06:07.025456  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:06:07.034855  303437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
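Since the host cgroup driver was detected as cgroupfs (detect.go above), these sed edits force SystemdCgroup = false in /etc/containerd/config.toml so that containerd and the kubelet agree on the driver. The effective value can be checked afterwards (sketch, run from the host against the node container):

	docker exec newest-cni-168808 grep -n 'SystemdCgroup' /etc/containerd/config.toml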
	I1210 07:06:07.044266  303437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:06:07.052503  303437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:06:07.060278  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:07.175410  303437 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:06:07.276715  303437 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:06:07.276786  303437 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:06:07.280624  303437 start.go:564] Will wait 60s for crictl version
	I1210 07:06:07.280687  303437 ssh_runner.go:195] Run: which crictl
	I1210 07:06:07.284270  303437 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:06:07.312279  303437 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:06:07.312345  303437 ssh_runner.go:195] Run: containerd --version
	I1210 07:06:07.332603  303437 ssh_runner.go:195] Run: containerd --version
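After the config rewrites, daemon-reload, and containerd restart, the harness confirms the runtime came back by querying crictl and containerd for their versions over the endpoint configured in /etc/crictl.yaml above. The same health check by hand (sketch, from the host):

	docker exec newest-cni-168808 sudo crictl version
	docker exec newest-cni-168808 containerd --version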
	I1210 07:06:07.358017  303437 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1210 07:06:07.360940  303437 cli_runner.go:164] Run: docker network inspect newest-cni-168808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:06:07.377362  303437 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:06:07.381128  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.393654  303437 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:06:07.396326  303437 kubeadm.go:884] updating cluster {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:06:07.396576  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.559787  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
	I1210 07:06:07.709730  303437 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
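
[editor's note] The checksum=file:...kubeadm.sha256 query above tells the downloader to verify the binary against the published SHA-256 file instead of caching it. A minimal sketch of that verification step in Go (the helper name is made up; the expected hash would come from the .sha256 file at the logged URL):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 checks that the file at path hashes to wantHex,
// the digest published alongside the binary.
func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(wantHex) {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Hash placeholder: use the contents of kubeadm.sha256 here.
	fmt.Println(verifySHA256("kubeadm", "<hash from kubeadm.sha256>"))
}
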
	I1210 07:06:07.859001  303437 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1210 07:06:07.859128  303437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:06:07.883821  303437 containerd.go:627] all images are preloaded for containerd runtime.
	I1210 07:06:07.883846  303437 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:06:07.883855  303437 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1210 07:06:07.883958  303437 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-168808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:06:07.884031  303437 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:06:07.913929  303437 cni.go:84] Creating CNI manager for ""
	I1210 07:06:07.913952  303437 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 07:06:07.913973  303437 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:06:07.913999  303437 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-168808 NodeName:newest-cni-168808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:06:07.914120  303437 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-168808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
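[editor's note] The manifest above stitches four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one file from the options struct logged at kubeadm.go:190; minikube renders it with Go text templates. A stripped-down sketch of that render step, assuming a small subset of fields chosen for illustration:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	// Values mirror the kubeadm options logged above.
	data := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.76.2", 8443, "/run/containerd/containerd.sock", "newest-cni-168808"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, data) // writes the InitConfiguration stanza
}
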
	I1210 07:06:07.914189  303437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:06:07.921856  303437 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:06:07.921924  303437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:06:07.929166  303437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1210 07:06:07.941324  303437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:06:07.954047  303437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1210 07:06:07.966208  303437 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:06:07.969747  303437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:06:07.979238  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.094271  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:08.111901  303437 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808 for IP: 192.168.76.2
	I1210 07:06:08.111935  303437 certs.go:195] generating shared ca certs ...
	I1210 07:06:08.111952  303437 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.112156  303437 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:06:08.112239  303437 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:06:08.112261  303437 certs.go:257] generating profile certs ...
	I1210 07:06:08.112411  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/client.key
	I1210 07:06:08.112508  303437 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key.38f3d6eb
	I1210 07:06:08.112594  303437 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key
	I1210 07:06:08.112776  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:06:08.112825  303437 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:06:08.112863  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:06:08.112899  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:06:08.112950  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:06:08.112979  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:06:08.113053  303437 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:06:08.113737  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:06:08.131868  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:06:08.149347  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:06:08.173211  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:06:08.201112  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:06:08.217931  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:06:08.234927  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:06:08.255525  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/newest-cni-168808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:06:08.274117  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:06:08.291924  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:06:08.309223  303437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:06:08.326082  303437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:06:08.338602  303437 ssh_runner.go:195] Run: openssl version
	I1210 07:06:08.345277  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.353152  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:06:08.360717  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364534  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.364612  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:06:08.406623  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:06:08.414672  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.422361  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:06:08.430022  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433878  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.433973  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:06:08.475572  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:06:08.483285  303437 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.491000  303437 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:06:08.498512  303437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502241  303437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.502306  303437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:06:08.543558  303437 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
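
[editor's note] Each CA above is installed the same way: copy it under /usr/share/ca-certificates, symlink it into /etc/ssl/certs, then create the <subject-hash>.0 link that OpenSSL's lookup expects; `openssl x509 -hash -noout` prints that hash (b5213941, 51391683, 3ec20f2e above). A sketch of the hash-link step in Go, shelling out to openssl as the log does:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL
// uses to find a CA certificate by subject hash.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, as ln -fs would
	return os.Symlink(certPath, link)
}

func main() {
	_ = hashLink("/usr/share/ca-certificates/minikubeCA.pem")
}
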
	I1210 07:06:08.551469  303437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:06:08.555461  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:06:08.597134  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:06:08.638002  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:06:08.678965  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:06:08.720427  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:06:08.763492  303437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
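
[editor's note] `openssl x509 -checkend 86400` exits nonzero if the certificate expires within the next 24 hours, which is how the six control-plane certs above are screened before reuse. The equivalent check with Go's crypto/x509 (a sketch; the file path is taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the given window (86400s == 24h in the log).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
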
	I1210 07:06:08.809518  303437 kubeadm.go:401] StartCluster: {Name:newest-cni-168808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-168808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:06:08.809633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:06:08.809696  303437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:06:08.836487  303437 cri.go:89] found id: ""
	I1210 07:06:08.836609  303437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:06:08.844505  303437 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:06:08.844525  303437 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:06:08.844604  303437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:06:08.852026  303437 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:06:08.852667  303437 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-168808" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.852944  303437 kubeconfig.go:62] /home/jenkins/minikube-integration/22094-2307/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-168808" cluster setting kubeconfig missing "newest-cni-168808" context setting]
	I1210 07:06:08.853395  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.854743  303437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:06:08.863687  303437 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 07:06:08.863719  303437 kubeadm.go:602] duration metric: took 19.187765ms to restartPrimaryControlPlane
	I1210 07:06:08.863729  303437 kubeadm.go:403] duration metric: took 54.219605ms to StartCluster
	I1210 07:06:08.863764  303437 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.863854  303437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:06:08.864943  303437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:06:08.865201  303437 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:06:08.865553  303437 config.go:182] Loaded profile config "newest-cni-168808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:06:08.865626  303437 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:06:08.865710  303437 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-168808"
	I1210 07:06:08.865725  303437 addons.go:70] Setting dashboard=true in profile "newest-cni-168808"
	I1210 07:06:08.865738  303437 addons.go:70] Setting default-storageclass=true in profile "newest-cni-168808"
	I1210 07:06:08.865748  303437 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-168808"
	I1210 07:06:08.865755  303437 addons.go:239] Setting addon dashboard=true in "newest-cni-168808"
	W1210 07:06:08.865763  303437 addons.go:248] addon dashboard should already be in state true
	I1210 07:06:08.865787  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866234  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.865732  303437 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-168808"
	I1210 07:06:08.866264  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.866892  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.866245  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.870618  303437 out.go:179] * Verifying Kubernetes components...
	I1210 07:06:08.877218  303437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:06:08.909365  303437 addons.go:239] Setting addon default-storageclass=true in "newest-cni-168808"
	I1210 07:06:08.909422  303437 host.go:66] Checking if "newest-cni-168808" exists ...
	I1210 07:06:08.909955  303437 cli_runner.go:164] Run: docker container inspect newest-cni-168808 --format={{.State.Status}}
	I1210 07:06:08.935168  303437 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:06:08.938081  303437 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:06:08.938245  303437 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:06:08.941690  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:06:08.941720  303437 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:06:08.941756  303437 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:08.941772  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:06:08.941809  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.941835  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:08.974920  303437 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:08.974945  303437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:06:08.975007  303437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-168808
	I1210 07:06:09.018425  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.019111  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
	I1210 07:06:09.028670  303437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/newest-cni-168808/id_rsa Username:docker}
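
[editor's note] The three `docker container inspect -f` calls above extract the host port that Docker published for the container's 22/tcp, so each addon goroutine can open its own SSH client (127.0.0.1:33103 here). A Go sketch of the same lookup via the docker CLI, using the exact format string from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort returns the host port docker published for the
// container's 22/tcp endpoint.
func sshPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fmt.Println(sshPort("newest-cni-168808"))
}
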
	I1210 07:06:09.182128  303437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:06:09.189848  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:09.218621  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:06:09.218696  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:06:09.233237  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:09.248580  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:06:09.248655  303437 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:06:09.280152  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:06:09.280225  303437 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:06:09.294171  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:06:09.294239  303437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:06:09.308986  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:06:09.309057  303437 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:06:09.323118  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:06:09.323195  303437 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:06:09.337212  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:06:09.337284  303437 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:06:09.351939  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:06:09.352006  303437 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:06:09.364684  303437 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.364749  303437 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:06:09.377472  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:09.912036  303437 api_server.go:52] waiting for apiserver process to appear ...
	W1210 07:06:09.912102  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912165  303437 retry.go:31] will retry after 137.554553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:09.912180  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912239  303437 retry.go:31] will retry after 162.08127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912111  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:09.912371  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:09.912391  303437 retry.go:31] will retry after 156.096194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.049986  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:10.068682  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:10.075250  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:10.139495  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.139526  303437 retry.go:31] will retry after 525.238587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196161  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196246  303437 retry.go:31] will retry after 422.355289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.196206  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.196316  303437 retry.go:31] will retry after 388.387448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.412254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:10.585608  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:10.619095  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:10.648889  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.648984  303437 retry.go:31] will retry after 452.281973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.665111  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:10.718838  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.718922  303437 retry.go:31] will retry after 323.626302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:10.751170  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:10.751201  303437 retry.go:31] will retry after 426.205037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
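
The "retry.go:31] will retry after ..." lines above come from minikube's backoff helper: each failed kubectl apply is retried with a growing, jittered delay (323ms, 426ms, 644ms, ... for the same manifest). Below is a minimal Go sketch of that pattern; the function name applyWithRetry and the backoff constants are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry shells out to kubectl and retries with growing,
    // jittered delays, in the spirit of the retry.go lines logged above.
    func applyWithRetry(kubectl, kubeconfig string, manifests []string, maxAttempts int) error {
    	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply", "--force"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	delay := 300 * time.Millisecond
    	var lastErr error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
    		// Jitter so concurrent appliers (storageclass, storage-provisioner,
    		// dashboard) do not retry in lockstep, then roughly double the delay,
    		// mirroring the 323ms -> 426ms -> 644ms progression in the log.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, lastErr)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return lastErr
    }

    func main() {
    	err := applyWithRetry(
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{"/etc/kubernetes/addons/storage-provisioner.yaml"},
    		5,
    	)
    	if err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
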
	I1210 07:06:10.912296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:08.413486  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:10.912684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:11.043189  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:11.101706  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.108011  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.108097  303437 retry.go:31] will retry after 465.500211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:11.171627  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.171733  303437 retry.go:31] will retry after 644.635053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.177835  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:11.248736  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.248773  303437 retry.go:31] will retry after 646.277835ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.413044  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:11.574386  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:11.635719  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.635755  303437 retry.go:31] will retry after 992.827501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.816838  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:11.874310  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.874341  303437 retry.go:31] will retry after 847.092889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.895446  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:11.912890  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:11.979233  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:11.979274  303437 retry.go:31] will retry after 1.723803171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.412929  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:12.629708  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:12.711328  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.711402  303437 retry.go:31] will retry after 1.682909305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.721580  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:12.787715  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.787755  303437 retry.go:31] will retry after 1.523563907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:12.912980  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.412270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:13.704137  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:13.769291  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.769319  303437 retry.go:31] will retry after 2.655752177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:13.912604  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:14.312036  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:14.379977  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.380010  303437 retry.go:31] will retry after 2.120509482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.395420  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:14.412979  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:14.494970  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.495005  303437 retry.go:31] will retry after 2.083776468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:14.913027  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.412429  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:15.912376  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
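
Between apply attempts, the log runs "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms, apparently polling for the apiserver process to reappear. A sketch of such a poll loop follows, under the assumption that success simply means pgrep exits 0; the name waitForAPIServerProcess and the timeout value are illustrative, not taken from minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep every 500ms until a kube-apiserver
    // process for this minikube profile shows up, or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists; with -f the
    		// pattern is matched against the full command line, -x requires an
    		// exact match, and -n selects the newest matching process.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
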
	W1210 07:06:12.913304  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:15.412868  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:16.412255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:16.425325  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:16.500296  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.500325  303437 retry.go:31] will retry after 1.753545178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.501400  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:16.562473  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.562506  303437 retry.go:31] will retry after 5.63085781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.579894  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:16.640721  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.640756  303437 retry.go:31] will retry after 2.710169887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:16.912245  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.412350  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:17.913142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.254741  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:18.317147  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.317176  303437 retry.go:31] will retry after 6.057763532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:18.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:18.912752  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:19.352062  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:06:19.412870  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:19.413382  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.413410  303437 retry.go:31] will retry after 6.763226999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:19.913016  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.412997  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:20.913098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
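Interleaved with the addon retries, the same process (PID 303437) polls roughly every 500ms for a running apiserver via sudo pgrep -xnf kube-apiserver.*minikube.*; pgrep exits non-zero while nothing matches, which is why the line keeps repeating. A hypothetical sketch of that poll loop, with only the pgrep pattern copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process
// matching the minikube profile appears. -x matches the pattern against
// the whole command line, -n picks the newest match, -f matches full argv.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a process matches, 1 when none does.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}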
	W1210 07:06:17.413684  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:19.913294  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:21.412278  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:21.913122  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.194391  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:22.251091  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.251123  303437 retry.go:31] will retry after 9.11395006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:22.412163  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:22.912351  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.412284  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:23.913156  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:24.375236  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:24.412827  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:24.440293  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.440322  303437 retry.go:31] will retry after 9.4401753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:24.912889  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.412233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:25.912307  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:21.913508  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:23.913605  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:26.413204  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
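The W ... node_ready.go lines belong to a second minikube process (PID 296020, the no-preload-320236 start) whose output is interleaved in this log. It polls the node's Ready condition through the apiserver REST endpoint, logging each connection-refused error before retrying. A minimal sketch of such a poll using client-go follows, assuming the kubeconfig path shown in the log is usable from where the code runs; the helper name and the 2.5s interval are illustrative, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's conditions until Ready is True or the
// timeout elapses; each failed GET is logged and retried, like the
// node_ready.go warnings above.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is the "connection
			// refused" error seen in the log; just retry.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("node %q was not Ready within %v", name, timeout)
}

func main() {
	// Kubeconfig path copied from the KUBECONFIG= value in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "no-preload-320236", 6*time.Minute))
}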
	I1210 07:06:26.177306  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:26.250932  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.250965  303437 retry.go:31] will retry after 5.997165797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:26.412268  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:26.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.412900  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:27.912402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.412186  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:28.912521  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.412227  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:29.912255  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.413237  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:30.912254  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:28.413461  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:30.913644  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:31.366162  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:31.412559  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:31.439835  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.439865  303437 retry.go:31] will retry after 9.181638872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:31.912411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.248486  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:32.313416  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.313450  303437 retry.go:31] will retry after 9.93876945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:32.412880  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:32.912746  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.412590  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:33.880694  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:06:33.912312  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.964338  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:33.964372  303437 retry.go:31] will retry after 6.698338092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:34.413098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:34.912991  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.413188  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:35.912404  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:06:33.413489  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:35.913510  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	W1210 07:06:38.413592  296020 node_ready.go:55] error getting node "no-preload-320236" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-320236": dial tcp 192.168.85.2:8443: connect: connection refused
	I1210 07:06:40.413124  296020 node_ready.go:38] duration metric: took 6m0.00088218s for node "no-preload-320236" to be "Ready" ...
	I1210 07:06:40.416430  296020 out.go:203] 
	W1210 07:06:40.419386  296020 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:06:40.419405  296020 out.go:285] * 
	W1210 07:06:40.421537  296020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:06:40.424792  296020 out.go:203] 
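Process 296020 gives up after exactly 6m0s ("took 6m0.00088218s"), and the expired deadline surfaces as the GUEST_START failure above. A small illustrative sketch of how a context.WithTimeout wait yields the "context deadline exceeded" text in that exit message; waitNodeCondition here is a hypothetical stand-in, not minikube's WaitNodeCondition:

package main

import (
	"context"
	"fmt"
	"time"
)

// waitNodeCondition polls check() until it returns true or the context
// expires; on expiry it returns context.DeadlineExceeded, which is the
// "context deadline exceeded" wrapped into the GUEST_START message.
func waitNodeCondition(ctx context.Context, check func() bool) error {
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitNodeCondition(ctx, func() bool { return false }) // node never Ready
	fmt.Println("wait:", err) // after 6m: wait: context deadline exceeded
}

The other process (PID 303437) is unaffected by this exit and keeps polling and retrying in the remaining lines.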
	I1210 07:06:36.412320  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:36.912280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.412192  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:37.912490  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.412402  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:38.912902  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.412781  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:39.912868  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.413057  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:40.621960  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:06:40.663144  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:40.779058  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.779095  303437 retry.go:31] will retry after 16.870406936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:06:40.830377  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.830410  303437 retry.go:31] will retry after 13.844749205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:40.912652  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.412296  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:41.912802  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.252520  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:06:42.323589  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.323630  303437 retry.go:31] will retry after 27.422515535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:42.412805  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:42.912953  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.412903  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:43.912754  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.412272  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:44.912265  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.412790  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:45.912791  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.413202  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:46.912321  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.412292  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:47.912507  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.412885  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:48.912342  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.413070  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:49.912837  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.412236  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:50.912907  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.412287  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:51.913181  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.412208  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:52.912275  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.412923  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:53.912230  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:54.412280  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:54.676234  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:06:54.749679  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:54.749717  303437 retry.go:31] will retry after 32.358913109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
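
The retry.go:31 lines ("will retry after 32.358913109s") show the addon apply being wrapped in a jittered-backoff retry. A sketch of that pattern under stated assumptions; this is not minikube's retry.go, applyManifest and applyWithRetry are hypothetical names, and the jitter bound is chosen only to loosely match the delays seen in this log:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest mirrors the command in the log; --force does not bypass
// validation, which still needs a reachable apiserver.
func applyManifest(path string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %w\n%s", path, err, out)
	}
	return nil
}

// applyWithRetry retries the apply with a random delay between attempts.
func applyWithRetry(path string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyManifest(path); err == nil {
			return nil
		}
		// Random jitter up to ~40s, loosely matching the delays in the log.
		delay := time.Duration(rand.Int63n(int64(40 * time.Second)))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}

Retrying is the right call for transient apiserver startup delays, but here the apiserver never comes up, so every retry fails with the same "dial tcp [::1]:8443: connect: connection refused" validation error.
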
	I1210 07:06:54.913072  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.412886  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:55.913073  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.412961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:56.912198  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.412942  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:57.649751  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:06:57.723910  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.723937  303437 retry.go:31] will retry after 19.76255611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:06:57.912185  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.412253  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:58.912817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.412285  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:06:59.912592  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.412249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:00.912270  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.412382  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:01.912282  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.412190  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:02.912865  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.412818  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:03.912286  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.412820  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:04.913148  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.412411  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:05.912250  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.412297  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:06.913174  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.412239  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:07.912324  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:08.412210  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:08.912197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:08.912278  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:08.940273  303437 cri.go:89] found id: ""
	I1210 07:07:08.940300  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.940309  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:08.940316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:08.940374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:08.976821  303437 cri.go:89] found id: ""
	I1210 07:07:08.976848  303437 logs.go:282] 0 containers: []
	W1210 07:07:08.976857  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:08.976863  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:08.976928  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:09.004516  303437 cri.go:89] found id: ""
	I1210 07:07:09.004546  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.004555  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:09.004561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:09.004633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:09.029569  303437 cri.go:89] found id: ""
	I1210 07:07:09.029593  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.029602  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:09.029609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:09.029666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:09.055232  303437 cri.go:89] found id: ""
	I1210 07:07:09.055256  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.055265  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:09.055281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:09.055342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:09.080957  303437 cri.go:89] found id: ""
	I1210 07:07:09.080978  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.080986  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:09.080992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:09.081051  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:09.105491  303437 cri.go:89] found id: ""
	I1210 07:07:09.105561  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.105583  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:09.105603  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:09.105682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:09.129839  303437 cri.go:89] found id: ""
	I1210 07:07:09.129861  303437 logs.go:282] 0 containers: []
	W1210 07:07:09.129870  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
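
The cri.go scan above checks each control-plane component in turn by shelling out to `crictl ps -a --quiet --name=<component>`; with --quiet, crictl prints one container ID per line, so empty output means "0 containers". A sketch of that scan, assuming the hypothetical helper listContainers (minikube's cri.go does more, e.g. honoring the containerd root and namespaces):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns container IDs, one per line from crictl --quiet.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %d containers\n", c, len(ids))
		}
	}
}

Every component in this run returns an empty ID list, confirming that not even the static control-plane pods were created, so the fault lies at or before kubelet pod startup rather than in any one component.
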
	I1210 07:07:09.129879  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:09.129890  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:09.157418  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:09.157444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:09.218619  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:09.218655  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:09.233569  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:09.233598  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:09.299933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:09.291172    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.291836    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.293591    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.295225    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:09.296375    1841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:09.299954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:09.299968  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
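
When the scan finds nothing, logs.go falls back to gathering diagnostics: it runs a fixed set of shell pipelines (journalctl for kubelet and containerd, filtered dmesg, crictl/docker ps, kubectl describe nodes) and records each collector's output or failure. A sketch of that gathering loop, with the command strings copied from the log above; the structure is illustrative, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	diagnostics := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"containerd":       `sudo journalctl -u containerd -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range diagnostics {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// A failed collector (e.g. describe nodes with the apiserver down)
			// is reported but does not abort the remaining collectors.
			fmt.Printf("failed %s: %v\n", name, err)
		}
		fmt.Printf("%s: %d bytes\n", name, len(out))
	}
}

This explains the repeating "Gathering logs for ..." cycles below: each health-check failure triggers another full sweep, and only the "describe nodes" collector fails outright, since it is the only one that needs the apiserver.
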
	I1210 07:07:09.746365  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:09.810849  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:09.810882  303437 retry.go:31] will retry after 38.106772232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:11.825038  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:11.835407  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:11.835491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:11.859384  303437 cri.go:89] found id: ""
	I1210 07:07:11.859407  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.859416  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:11.859422  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:11.859482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:11.883645  303437 cri.go:89] found id: ""
	I1210 07:07:11.883667  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.883677  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:11.883683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:11.883746  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:11.912907  303437 cri.go:89] found id: ""
	I1210 07:07:11.912987  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.913010  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:11.913029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:11.913135  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:11.954332  303437 cri.go:89] found id: ""
	I1210 07:07:11.954354  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.954363  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:11.954369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:11.954447  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:11.987932  303437 cri.go:89] found id: ""
	I1210 07:07:11.988008  303437 logs.go:282] 0 containers: []
	W1210 07:07:11.988024  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:11.988048  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:11.988134  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:12.016019  303437 cri.go:89] found id: ""
	I1210 07:07:12.016043  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.016052  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:12.016059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:12.016161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:12.041574  303437 cri.go:89] found id: ""
	I1210 07:07:12.041616  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.041625  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:12.041633  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:12.041702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:12.067242  303437 cri.go:89] found id: ""
	I1210 07:07:12.067309  303437 logs.go:282] 0 containers: []
	W1210 07:07:12.067335  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:12.067351  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:12.067368  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:12.080423  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:12.080492  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:12.142902  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:12.135099    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.135777    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.137430    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.138066    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.139703    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:12.135099    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.135777    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.137430    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.138066    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:12.139703    1941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:12.142926  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:12.142940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:12.170013  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:12.170095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:12.205843  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:12.205871  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:14.769151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:14.779543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:14.779628  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:14.804854  303437 cri.go:89] found id: ""
	I1210 07:07:14.804877  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.804885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:14.804892  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:14.804951  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:14.829499  303437 cri.go:89] found id: ""
	I1210 07:07:14.829521  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.829529  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:14.829535  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:14.829592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:14.857960  303437 cri.go:89] found id: ""
	I1210 07:07:14.857984  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.857993  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:14.858000  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:14.858058  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:14.882942  303437 cri.go:89] found id: ""
	I1210 07:07:14.882964  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.882972  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:14.882978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:14.883074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:14.906556  303437 cri.go:89] found id: ""
	I1210 07:07:14.906582  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.906591  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:14.906598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:14.906653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:14.944744  303437 cri.go:89] found id: ""
	I1210 07:07:14.944771  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.944780  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:14.944796  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:14.944859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:14.974225  303437 cri.go:89] found id: ""
	I1210 07:07:14.974248  303437 logs.go:282] 0 containers: []
	W1210 07:07:14.974256  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:14.974263  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:14.974323  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:15.005431  303437 cri.go:89] found id: ""
	I1210 07:07:15.005515  303437 logs.go:282] 0 containers: []
	W1210 07:07:15.005539  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:15.005564  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:15.005607  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:15.075329  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:15.067284    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.067872    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069470    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069869    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.071556    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:15.067284    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.067872    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069470    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.069869    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:15.071556    2050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:15.075363  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:15.075376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:15.100635  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:15.100670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:15.129987  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:15.130013  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:15.198219  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:15.198300  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:17.487235  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:17.543553  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:17.543587  303437 retry.go:31] will retry after 31.69876155s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:17.712834  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:17.723193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:17.723262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:17.747430  303437 cri.go:89] found id: ""
	I1210 07:07:17.747453  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.747462  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:17.747468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:17.747525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:17.771960  303437 cri.go:89] found id: ""
	I1210 07:07:17.771982  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.771990  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:17.771996  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:17.772060  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:17.796155  303437 cri.go:89] found id: ""
	I1210 07:07:17.796176  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.796184  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:17.796190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:17.796251  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:17.825359  303437 cri.go:89] found id: ""
	I1210 07:07:17.825385  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.825394  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:17.825401  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:17.825462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:17.853147  303437 cri.go:89] found id: ""
	I1210 07:07:17.853170  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.853178  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:17.853184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:17.853243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:17.878806  303437 cri.go:89] found id: ""
	I1210 07:07:17.878830  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.878839  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:17.878846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:17.878905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:17.902975  303437 cri.go:89] found id: ""
	I1210 07:07:17.902999  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.903007  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:17.903037  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:17.903112  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:17.934568  303437 cri.go:89] found id: ""
	I1210 07:07:17.934592  303437 logs.go:282] 0 containers: []
	W1210 07:07:17.934600  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:17.934610  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:17.934621  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:17.999695  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:17.999740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:18.029219  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:18.029256  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:18.094199  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:18.085818    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.086373    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.088284    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.089006    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.090767    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:18.085818    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.086373    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.088284    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.089006    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:18.090767    2174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:18.094223  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:18.094238  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:18.120245  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:18.120283  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.649514  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:20.661165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:20.661236  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:20.686549  303437 cri.go:89] found id: ""
	I1210 07:07:20.686572  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.686581  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:20.686587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:20.686654  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:20.711873  303437 cri.go:89] found id: ""
	I1210 07:07:20.711895  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.711903  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:20.711910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:20.711968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:20.736261  303437 cri.go:89] found id: ""
	I1210 07:07:20.736283  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.736292  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:20.736298  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:20.736360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:20.765759  303437 cri.go:89] found id: ""
	I1210 07:07:20.765781  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.765797  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:20.765804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:20.765862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:20.793639  303437 cri.go:89] found id: ""
	I1210 07:07:20.793661  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.793669  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:20.793675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:20.793751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:20.818318  303437 cri.go:89] found id: ""
	I1210 07:07:20.818339  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.818347  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:20.818354  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:20.818417  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:20.843499  303437 cri.go:89] found id: ""
	I1210 07:07:20.843523  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.843533  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:20.843539  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:20.843598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:20.868745  303437 cri.go:89] found id: ""
	I1210 07:07:20.868768  303437 logs.go:282] 0 containers: []
	W1210 07:07:20.868776  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:20.868785  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:20.868796  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:20.897905  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:20.897981  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:20.962576  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:20.962654  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:20.977746  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:20.977835  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:21.045052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:21.037239    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.037840    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.039622    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.040051    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.041574    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:21.037239    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.037840    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.039622    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.040051    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:21.041574    2296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:21.045073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:21.045085  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:23.570777  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:23.580946  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:23.581021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:23.605355  303437 cri.go:89] found id: ""
	I1210 07:07:23.605379  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.605388  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:23.605394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:23.605451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:23.632675  303437 cri.go:89] found id: ""
	I1210 07:07:23.632697  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.632706  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:23.632713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:23.632783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:23.656579  303437 cri.go:89] found id: ""
	I1210 07:07:23.656602  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.656610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:23.656617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:23.656675  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:23.684796  303437 cri.go:89] found id: ""
	I1210 07:07:23.684816  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.684825  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:23.684832  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:23.684893  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:23.709043  303437 cri.go:89] found id: ""
	I1210 07:07:23.709064  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.709073  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:23.709079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:23.709149  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:23.733315  303437 cri.go:89] found id: ""
	I1210 07:07:23.733340  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.733348  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:23.733355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:23.733413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:23.761492  303437 cri.go:89] found id: ""
	I1210 07:07:23.761514  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.761524  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:23.761530  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:23.761586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:23.786489  303437 cri.go:89] found id: ""
	I1210 07:07:23.786511  303437 logs.go:282] 0 containers: []
	W1210 07:07:23.786520  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:23.786530  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:23.786540  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:23.812193  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:23.812231  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:23.842956  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:23.842990  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:23.898018  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:23.898052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:23.912477  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:23.912507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:23.996757  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:23.988502    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.989251    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.990734    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.991360    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:23.993170    2414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
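
Every control-plane probe above returned an empty ID list, so the describe-nodes step fails the same way each cycle: nothing is listening on localhost:8443 and every dial is refused before any API traffic happens. A minimal Go sketch of that reachability symptom, assuming the same address and an arbitrary 2-second timeout (illustrative only, not minikube's own code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port the kubeconfig above points at.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Mirrors the "connect: connection refused" dials in the stderr above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
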
	I1210 07:07:26.497835  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:26.508472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:26.508547  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:26.533241  303437 cri.go:89] found id: ""
	I1210 07:07:26.533264  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.533272  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:26.533279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:26.533337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:26.558844  303437 cri.go:89] found id: ""
	I1210 07:07:26.558868  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.558877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:26.558883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:26.558941  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:26.584008  303437 cri.go:89] found id: ""
	I1210 07:07:26.584042  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.584051  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:26.584058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:26.584176  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:26.609123  303437 cri.go:89] found id: ""
	I1210 07:07:26.609145  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.609153  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:26.609160  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:26.609220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:26.633105  303437 cri.go:89] found id: ""
	I1210 07:07:26.633127  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.633136  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:26.633142  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:26.633220  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:26.662834  303437 cri.go:89] found id: ""
	I1210 07:07:26.662858  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.662875  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:26.662897  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:26.662989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:26.688296  303437 cri.go:89] found id: ""
	I1210 07:07:26.688318  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.688326  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:26.688332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:26.688401  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:26.714475  303437 cri.go:89] found id: ""
	I1210 07:07:26.714545  303437 logs.go:282] 0 containers: []
	W1210 07:07:26.714564  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:26.714595  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:26.714609  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:26.769794  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:26.769827  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:26.782871  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:26.782909  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:26.843846  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:26.836227    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.836952    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838581    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.838893    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:26.840461    2513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:26.843867  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:26.843881  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:26.869319  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:26.869353  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:27.109532  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:27.174544  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:07:27.174590  303437 retry.go:31] will retry after 31.997742819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
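
The retry.go:31 line above shows the addon apply being deferred for roughly 32 seconds after the OpenAPI validation download fails. A hedged sketch of that retry-until-the-apiserver-answers pattern; the attempt count, the fixed interval, and running kubectl locally instead of through minikube's ssh_runner are all assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts run
// out, sleeping between tries the way the retry.go line above does.
func applyWithRetry(manifest string, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, runErr := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if runErr == nil {
			return nil
		}
		err = fmt.Errorf("apply failed: %w: %s", runErr, out)
		time.Sleep(wait) // the log shows a jittered ~32s interval here
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 32*time.Second); err != nil {
		fmt.Println(err)
	}
}
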
	I1210 07:07:29.396194  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:29.406428  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:29.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:29.433424  303437 cri.go:89] found id: ""
	I1210 07:07:29.433455  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.433465  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:29.433471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:29.433536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:29.463589  303437 cri.go:89] found id: ""
	I1210 07:07:29.463615  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.463624  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:29.463630  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:29.463686  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:29.492343  303437 cri.go:89] found id: ""
	I1210 07:07:29.492365  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.492374  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:29.492380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:29.492437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:29.516069  303437 cri.go:89] found id: ""
	I1210 07:07:29.516097  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.516106  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:29.516113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:29.516171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:29.539661  303437 cri.go:89] found id: ""
	I1210 07:07:29.539693  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.539703  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:29.539712  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:29.539781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:29.563791  303437 cri.go:89] found id: ""
	I1210 07:07:29.563814  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.563823  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:29.563829  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:29.563887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:29.589136  303437 cri.go:89] found id: ""
	I1210 07:07:29.589160  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.589168  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:29.589175  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:29.589233  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:29.614701  303437 cri.go:89] found id: ""
	I1210 07:07:29.614724  303437 logs.go:282] 0 containers: []
	W1210 07:07:29.614734  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:29.614743  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:29.614756  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:29.670207  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:29.670240  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:29.683977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:29.684005  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:29.748039  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:29.739856    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.740576    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742311    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.742892    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:29.744661    2633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:29.748061  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:29.748077  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:29.772992  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:29.773024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
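
Each cri.go:54 / logs.go:284 pair above is one probe: list all containers whose name matches a control-plane component and warn when the ID list comes back empty. A sketch of that loop with the component names taken verbatim from the log; running crictl locally instead of over SSH is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same invocation as the "Run: sudo crictl ps -a --quiet" lines above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe for %s failed: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Printf("%s: %v\n", name, ids)
			continue
		}
		// Matches the W-level "No container was found matching" lines.
		fmt.Printf("no container was found matching %q\n", name)
	}
}
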
	I1210 07:07:32.300508  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:32.310795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:32.310865  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:32.334361  303437 cri.go:89] found id: ""
	I1210 07:07:32.334387  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.334396  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:32.334403  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:32.334478  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:32.361534  303437 cri.go:89] found id: ""
	I1210 07:07:32.361627  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.361651  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:32.361681  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:32.361764  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:32.386488  303437 cri.go:89] found id: ""
	I1210 07:07:32.386513  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.386521  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:32.386528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:32.386588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:32.415239  303437 cri.go:89] found id: ""
	I1210 07:07:32.415265  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.415274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:32.415280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:32.415340  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:32.443074  303437 cri.go:89] found id: ""
	I1210 07:07:32.443097  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.443105  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:32.443111  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:32.443170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:32.477593  303437 cri.go:89] found id: ""
	I1210 07:07:32.477620  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.477629  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:32.477636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:32.477693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:32.502550  303437 cri.go:89] found id: ""
	I1210 07:07:32.502575  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.502584  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:32.502590  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:32.502666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:32.527562  303437 cri.go:89] found id: ""
	I1210 07:07:32.527585  303437 logs.go:282] 0 containers: []
	W1210 07:07:32.527606  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:32.527616  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:32.527632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:32.588732  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:32.581238    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.581723    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583416    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.583864    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:32.585321    2739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:32.588755  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:32.588767  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:32.614322  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:32.614354  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:32.642747  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:32.642777  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:32.697541  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:32.697576  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:35.211281  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:35.221258  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:35.221336  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:35.253168  303437 cri.go:89] found id: ""
	I1210 07:07:35.253193  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.253203  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:35.253210  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:35.253268  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:35.281234  303437 cri.go:89] found id: ""
	I1210 07:07:35.281257  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.281267  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:35.281273  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:35.281333  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:35.310530  303437 cri.go:89] found id: ""
	I1210 07:07:35.310554  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.310563  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:35.310570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:35.310627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:35.334764  303437 cri.go:89] found id: ""
	I1210 07:07:35.334792  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.334801  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:35.334813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:35.334870  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:35.361502  303437 cri.go:89] found id: ""
	I1210 07:07:35.361525  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.361534  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:35.361540  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:35.361607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:35.389058  303437 cri.go:89] found id: ""
	I1210 07:07:35.389080  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.389089  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:35.389095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:35.389154  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:35.425176  303437 cri.go:89] found id: ""
	I1210 07:07:35.425215  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.425226  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:35.425232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:35.425299  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:35.453052  303437 cri.go:89] found id: ""
	I1210 07:07:35.453079  303437 logs.go:282] 0 containers: []
	W1210 07:07:35.453088  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:35.453097  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:35.453108  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:35.522148  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:35.513319    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.513889    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.516065    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517374    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:35.517853    2852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:35.522174  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:35.522186  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:35.547665  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:35.547698  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:35.575564  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:35.575596  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:35.634362  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:35.634400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
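
The gathering phase repeats the same collectors each cycle: the kubelet and containerd units via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback, each wrapped in bash -c because of the pipelines. A sketch with the command strings copied from the log; collecting locally rather than through ssh_runner is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	collectors := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range collectors {
		// bash -c is required for the pipe and command-substitution syntax.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}
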
	I1210 07:07:38.149569  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:38.160486  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:38.160568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:38.201222  303437 cri.go:89] found id: ""
	I1210 07:07:38.201245  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.201253  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:38.201260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:38.201317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:38.237151  303437 cri.go:89] found id: ""
	I1210 07:07:38.237174  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.237183  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:38.237189  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:38.237259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:38.262732  303437 cri.go:89] found id: ""
	I1210 07:07:38.262760  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.262770  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:38.262777  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:38.262835  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:38.293247  303437 cri.go:89] found id: ""
	I1210 07:07:38.293273  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.293283  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:38.293290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:38.293351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:38.317818  303437 cri.go:89] found id: ""
	I1210 07:07:38.317840  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.317849  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:38.317855  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:38.317911  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:38.342419  303437 cri.go:89] found id: ""
	I1210 07:07:38.342447  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.342465  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:38.342473  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:38.342545  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:38.367206  303437 cri.go:89] found id: ""
	I1210 07:07:38.367271  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.367295  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:38.367316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:38.367408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:38.395595  303437 cri.go:89] found id: ""
	I1210 07:07:38.395617  303437 logs.go:282] 0 containers: []
	W1210 07:07:38.395626  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:38.395635  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:38.395646  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:38.455465  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:38.455496  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:38.469974  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:38.470052  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:38.534901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:38.526759    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.527529    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529111    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.529677    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:38.531428    2976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:38.534975  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:38.535033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:38.560101  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:38.560133  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:41.091155  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:41.101359  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:41.101439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:41.124928  303437 cri.go:89] found id: ""
	I1210 07:07:41.124950  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.124958  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:41.124964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:41.125021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:41.150502  303437 cri.go:89] found id: ""
	I1210 07:07:41.150525  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.150534  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:41.150541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:41.150597  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:41.175254  303437 cri.go:89] found id: ""
	I1210 07:07:41.175280  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.175289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:41.175295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:41.175355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:41.213279  303437 cri.go:89] found id: ""
	I1210 07:07:41.213302  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.213311  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:41.213317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:41.213376  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:41.241895  303437 cri.go:89] found id: ""
	I1210 07:07:41.241922  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.241931  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:41.241938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:41.241997  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:41.266233  303437 cri.go:89] found id: ""
	I1210 07:07:41.266259  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.266274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:41.266280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:41.266375  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:41.295481  303437 cri.go:89] found id: ""
	I1210 07:07:41.295503  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.295512  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:41.295519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:41.295586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:41.325350  303437 cri.go:89] found id: ""
	I1210 07:07:41.325372  303437 logs.go:282] 0 containers: []
	W1210 07:07:41.325381  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:41.325390  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:41.325402  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:41.381086  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:41.381121  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:41.394364  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:41.394411  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:41.475813  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:41.467819    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.468574    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.470350    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.471004    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:41.472517    3083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:41.475836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:41.475849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:41.500717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:41.500751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:44.031462  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:44.042099  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:44.042173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:44.066643  303437 cri.go:89] found id: ""
	I1210 07:07:44.066674  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.066683  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:44.066689  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:44.066752  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:44.091511  303437 cri.go:89] found id: ""
	I1210 07:07:44.091533  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.091542  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:44.091548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:44.091627  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:44.116433  303437 cri.go:89] found id: ""
	I1210 07:07:44.116455  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.116464  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:44.116470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:44.116527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:44.141546  303437 cri.go:89] found id: ""
	I1210 07:07:44.141568  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.141576  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:44.141583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:44.141659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:44.183580  303437 cri.go:89] found id: ""
	I1210 07:07:44.183602  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.183610  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:44.183616  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:44.183673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:44.214628  303437 cri.go:89] found id: ""
	I1210 07:07:44.214651  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.214659  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:44.214666  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:44.214738  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:44.241699  303437 cri.go:89] found id: ""
	I1210 07:07:44.241721  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.241729  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:44.241736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:44.241805  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:44.266706  303437 cri.go:89] found id: ""
	I1210 07:07:44.266729  303437 logs.go:282] 0 containers: []
	W1210 07:07:44.266737  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:44.266746  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:44.266758  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:44.321835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:44.321867  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:44.335089  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:44.335120  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:44.395294  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:44.387779    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.388344    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389371    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.389875    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:44.391491    3199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:07:44.395360  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:44.395388  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:44.425916  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:44.425956  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:46.965660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:46.976149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:46.976221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:47.003597  303437 cri.go:89] found id: ""
	I1210 07:07:47.003620  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.003629  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:47.003636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:47.003709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:47.028196  303437 cri.go:89] found id: ""
	I1210 07:07:47.028218  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.028226  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:47.028232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:47.028290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:47.056800  303437 cri.go:89] found id: ""
	I1210 07:07:47.056824  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.056833  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:47.056840  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:47.056916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:47.081593  303437 cri.go:89] found id: ""
	I1210 07:07:47.081656  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.081678  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:47.081697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:47.081767  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:47.110385  303437 cri.go:89] found id: ""
	I1210 07:07:47.110451  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.110474  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:47.110492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:47.110563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:47.136398  303437 cri.go:89] found id: ""
	I1210 07:07:47.136465  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.136490  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:47.136503  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:47.136576  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:47.162521  303437 cri.go:89] found id: ""
	I1210 07:07:47.162545  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.162554  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:47.162560  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:47.162617  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:47.200031  303437 cri.go:89] found id: ""
	I1210 07:07:47.200052  303437 logs.go:282] 0 containers: []
	W1210 07:07:47.200060  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:47.200069  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:47.200080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:47.240172  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:47.240197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:47.295589  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:47.295625  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:47.308817  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:47.308843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:47.373455  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:47.365473    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.366342    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.367939    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.368531    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.370138    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:47.365473    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.366342    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.367939    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.368531    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:47.370138    3327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:47.373479  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:47.373504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:47.918542  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:07:48.000256  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:48.000468  303437 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:07:49.243254  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:07:49.300794  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:49.300885  303437 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
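Both addon applies fail for the same underlying reason as the describe-nodes calls: kubectl cannot download /openapi/v2 for client-side validation because nothing is listening on 8443. The remedy kubectl suggests, --validate=false, would only skip that schema check; the apply itself would still be refused server-side, so minikube's "apply failed, will retry" path is the reasonable behavior here. A hedged sketch of such a retry wrapper follows; the command, manifest path, attempt count, and backoff are illustrative assumptions, not minikube's addons.go:

// applyretry.go — retry `kubectl apply` with a fixed backoff, in the
// spirit of the "apply failed, will retry" lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		var out []byte
		if out, err = cmd.CombinedOutput(); err == nil {
			return nil
		}
		fmt.Printf("apply failed (attempt %d): %v\n%s", i+1, err, out)
		time.Sleep(5 * time.Second) // fixed backoff; minikube's policy may differ
	}
	return err
}

func main() {
	// Manifest path taken from the log above; against a dead
	// apiserver every attempt will fail, as in this run.
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}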
	I1210 07:07:49.898427  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:49.908683  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:49.908754  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:49.934109  303437 cri.go:89] found id: ""
	I1210 07:07:49.934136  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.934145  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:49.934152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:49.934214  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:49.959202  303437 cri.go:89] found id: ""
	I1210 07:07:49.959226  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.959235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:49.959252  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:49.959329  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:49.983331  303437 cri.go:89] found id: ""
	I1210 07:07:49.983356  303437 logs.go:282] 0 containers: []
	W1210 07:07:49.983364  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:49.983371  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:49.983427  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:50.012230  303437 cri.go:89] found id: ""
	I1210 07:07:50.012265  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.012274  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:50.012281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:50.012350  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:50.039851  303437 cri.go:89] found id: ""
	I1210 07:07:50.039880  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.039889  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:50.039895  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:50.039962  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:50.071162  303437 cri.go:89] found id: ""
	I1210 07:07:50.071186  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.071195  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:50.071201  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:50.071265  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:50.097095  303437 cri.go:89] found id: ""
	I1210 07:07:50.097118  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.097127  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:50.097134  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:50.097198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:50.121941  303437 cri.go:89] found id: ""
	I1210 07:07:50.121966  303437 logs.go:282] 0 containers: []
	W1210 07:07:50.121976  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:50.121985  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:50.121998  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:50.178251  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:50.178286  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:50.195455  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:50.195491  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:50.283052  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:50.274829    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.275404    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277130    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277736    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.279468    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:50.274829    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.275404    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277130    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.277736    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:50.279468    3442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:50.283077  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:50.283098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:50.309433  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:50.309472  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:52.837493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:52.848301  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:52.848370  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:52.872661  303437 cri.go:89] found id: ""
	I1210 07:07:52.872682  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.872690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:52.872696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:52.872755  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:52.895064  303437 cri.go:89] found id: ""
	I1210 07:07:52.895090  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.895100  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:52.895112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:52.895170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:52.918926  303437 cri.go:89] found id: ""
	I1210 07:07:52.918950  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.918958  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:52.918964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:52.919038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:52.942801  303437 cri.go:89] found id: ""
	I1210 07:07:52.942823  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.942831  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:52.942838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:52.942895  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:52.968885  303437 cri.go:89] found id: ""
	I1210 07:07:52.968910  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.968919  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:52.968925  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:52.968984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:52.992050  303437 cri.go:89] found id: ""
	I1210 07:07:52.992072  303437 logs.go:282] 0 containers: []
	W1210 07:07:52.992080  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:52.992087  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:52.992145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:53.020481  303437 cri.go:89] found id: ""
	I1210 07:07:53.020507  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.020516  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:53.020523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:53.020586  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:53.045391  303437 cri.go:89] found id: ""
	I1210 07:07:53.045412  303437 logs.go:282] 0 containers: []
	W1210 07:07:53.045421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:53.045430  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:53.045441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:53.100408  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:53.100444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:53.115165  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:53.115192  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:53.192011  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:53.181167    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.181972    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.182929    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.185331    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.186084    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:53.181167    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.181972    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.182929    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.185331    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:53.186084    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:53.192034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:53.192049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:53.220495  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:53.220572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:55.749081  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:55.759242  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:55.759314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:55.782656  303437 cri.go:89] found id: ""
	I1210 07:07:55.782681  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.782690  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:55.782707  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:55.782766  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:55.807483  303437 cri.go:89] found id: ""
	I1210 07:07:55.807509  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.807527  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:55.807534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:55.807595  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:55.832851  303437 cri.go:89] found id: ""
	I1210 07:07:55.832887  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.832896  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:55.832906  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:55.832966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:55.857553  303437 cri.go:89] found id: ""
	I1210 07:07:55.857575  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.857584  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:55.857591  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:55.857653  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:55.885207  303437 cri.go:89] found id: ""
	I1210 07:07:55.885230  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.885240  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:55.885246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:55.885315  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:55.909296  303437 cri.go:89] found id: ""
	I1210 07:07:55.909322  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.909332  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:55.909340  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:55.909398  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:55.933701  303437 cri.go:89] found id: ""
	I1210 07:07:55.933723  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.933733  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:55.933740  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:55.933812  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:55.958095  303437 cri.go:89] found id: ""
	I1210 07:07:55.958121  303437 logs.go:282] 0 containers: []
	W1210 07:07:55.958130  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:55.958139  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:55.958150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:56.028949  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:56.021322    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.021920    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023068    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023441    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.025194    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:56.021322    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.021920    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023068    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.023441    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:56.025194    3663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:56.028976  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:56.029046  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:56.055269  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:56.055308  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:56.087408  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:56.087438  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:56.143537  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:56.143570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:58.657737  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:07:58.669685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:07:58.669751  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:07:58.704925  303437 cri.go:89] found id: ""
	I1210 07:07:58.704947  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.704955  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:07:58.704962  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:07:58.705021  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:07:58.732775  303437 cri.go:89] found id: ""
	I1210 07:07:58.732798  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.732806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:07:58.732812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:07:58.732871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:07:58.757863  303437 cri.go:89] found id: ""
	I1210 07:07:58.757885  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.757893  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:07:58.757899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:07:58.757957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:07:58.782893  303437 cri.go:89] found id: ""
	I1210 07:07:58.782914  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.782923  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:07:58.782929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:07:58.782987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:07:58.813425  303437 cri.go:89] found id: ""
	I1210 07:07:58.813458  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.813467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:07:58.813474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:07:58.813531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:07:58.837894  303437 cri.go:89] found id: ""
	I1210 07:07:58.837920  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.837930  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:07:58.837937  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:07:58.837994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:07:58.862767  303437 cri.go:89] found id: ""
	I1210 07:07:58.862793  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.862803  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:07:58.862810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:07:58.862871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:07:58.887161  303437 cri.go:89] found id: ""
	I1210 07:07:58.887190  303437 logs.go:282] 0 containers: []
	W1210 07:07:58.887203  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:07:58.887213  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:07:58.887226  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:07:58.912742  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:07:58.912774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:07:58.941751  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:07:58.941778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:07:58.997499  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:07:58.997538  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:07:59.012690  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:07:59.012716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:07:59.079032  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:07:59.071332    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.071853    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.073549    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.074116    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:07:59.075663    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:07:59.173255  303437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:07:59.241772  303437 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:07:59.241906  303437 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:07:59.245162  303437 out.go:179] * Enabled addons: 
	I1210 07:07:59.248019  303437 addons.go:530] duration metric: took 1m50.382393488s for enable addons: enabled=[]
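After roughly 1m50s of retries every addon callback has failed, so the enabled set is empty. The "duration metric" line reports plain wall-clock time for the whole enable phase; a generic Go sketch of that pattern is below (an assumption about the general idiom, not minikube's addons.go):

// durations.go — measure a phase the way the "duration metric" line
// above reports it: elapsed wall-clock time via time.Since.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(150 * time.Millisecond) // stand-in for the enable-addons work
	enabled := []string{}              // every callback failed in the run above
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}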
	I1210 07:08:01.579277  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:01.590395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:01.590469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:01.616988  303437 cri.go:89] found id: ""
	I1210 07:08:01.617017  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.617025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:01.617032  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:01.617095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:01.643533  303437 cri.go:89] found id: ""
	I1210 07:08:01.643555  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.643563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:01.643570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:01.643633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:01.683402  303437 cri.go:89] found id: ""
	I1210 07:08:01.683430  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.683439  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:01.683446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:01.683507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:01.714420  303437 cri.go:89] found id: ""
	I1210 07:08:01.714448  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.714457  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:01.714463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:01.714522  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:01.741588  303437 cri.go:89] found id: ""
	I1210 07:08:01.741614  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.741625  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:01.741632  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:01.741697  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:01.766133  303437 cri.go:89] found id: ""
	I1210 07:08:01.766163  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.766172  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:01.766178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:01.766246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:01.796151  303437 cri.go:89] found id: ""
	I1210 07:08:01.796173  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.796181  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:01.796188  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:01.796253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:01.821826  303437 cri.go:89] found id: ""
	I1210 07:08:01.821848  303437 logs.go:282] 0 containers: []
	W1210 07:08:01.821857  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:01.821872  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:01.821883  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:01.856135  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:01.856162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:01.912548  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:01.912582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:01.926252  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:01.926279  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:01.989471  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:08:01.981372    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.982086    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.983797    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.984477    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:01.986161    3915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:08:01.989491  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:01.989504  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:04.519169  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:04.529774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:04.529853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:04.557926  303437 cri.go:89] found id: ""
	I1210 07:08:04.557950  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.557967  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:04.557988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:04.558067  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:04.585171  303437 cri.go:89] found id: ""
	I1210 07:08:04.585195  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.585204  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:04.585223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:04.585292  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:04.613695  303437 cri.go:89] found id: ""
	I1210 07:08:04.613720  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.613729  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:04.613735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:04.613808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:04.637775  303437 cri.go:89] found id: ""
	I1210 07:08:04.637859  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.637880  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:04.637899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:04.637989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:04.673966  303437 cri.go:89] found id: ""
	I1210 07:08:04.674033  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.674057  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:04.674073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:04.674161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:04.706760  303437 cri.go:89] found id: ""
	I1210 07:08:04.706825  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.706846  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:04.706865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:04.706955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:04.748640  303437 cri.go:89] found id: ""
	I1210 07:08:04.748707  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.748731  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:04.748749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:04.748837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:04.778179  303437 cri.go:89] found id: ""
	I1210 07:08:04.778241  303437 logs.go:282] 0 containers: []
	W1210 07:08:04.778263  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
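Each probe cycle above checks for the expected control-plane components by listing CRI containers one name at a time. A minimal sketch of the same loop, assuming it is run as root inside the node with crictl configured for the containerd socket (the loop body mirrors the exact crictl invocation in the log):

    # Sketch only: probe for each expected component the way the harness does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done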
	I1210 07:08:04.778283  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:04.778324  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:04.838994  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:04.839038  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:04.852663  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:04.852737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:04.919247  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:04.910928    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.911685    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913313    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.913950    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:04.915537    4017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:04.919311  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:04.919346  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:04.944409  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:04.944441  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:07.475233  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:07.485817  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:07.485889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:07.510450  303437 cri.go:89] found id: ""
	I1210 07:08:07.510473  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.510482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:07.510488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:07.510549  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:07.536516  303437 cri.go:89] found id: ""
	I1210 07:08:07.536541  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.536550  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:07.536556  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:07.536646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:07.561868  303437 cri.go:89] found id: ""
	I1210 07:08:07.561893  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.561902  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:07.561908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:07.561987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:07.590197  303437 cri.go:89] found id: ""
	I1210 07:08:07.590221  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.590230  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:07.590236  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:07.590342  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:07.613514  303437 cri.go:89] found id: ""
	I1210 07:08:07.613539  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.613548  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:07.613555  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:07.613662  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:07.638377  303437 cri.go:89] found id: ""
	I1210 07:08:07.638402  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.638410  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:07.638417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:07.638477  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:07.667985  303437 cri.go:89] found id: ""
	I1210 07:08:07.668058  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.668082  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:07.668102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:07.668189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:07.698530  303437 cri.go:89] found id: ""
	I1210 07:08:07.698605  303437 logs.go:282] 0 containers: []
	W1210 07:08:07.698647  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:07.698671  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:07.698710  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:07.761708  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:07.761745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:07.775951  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:07.775978  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:07.842158  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:07.833714    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.834583    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836355    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.836801    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:07.838463    4129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:07.842183  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:07.842200  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:07.868656  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:07.868693  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
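The "container status" step is a shell one-liner with a fallback: prefer crictl when installed, otherwise fall back to docker. Roughly equivalent logic, expanded for readability (a sketch, not the harness's literal code):

    # Approximation of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi

Note the original one-liner also falls back to docker when crictl exists but fails, not only when it is missing.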
	I1210 07:08:10.398249  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:10.410905  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:10.410974  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:10.441450  303437 cri.go:89] found id: ""
	I1210 07:08:10.441474  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.441482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:10.441489  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:10.441551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:10.467324  303437 cri.go:89] found id: ""
	I1210 07:08:10.467345  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.467354  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:10.467360  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:10.467422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:10.490980  303437 cri.go:89] found id: ""
	I1210 07:08:10.491001  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.491117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:10.491125  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:10.491186  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:10.515608  303437 cri.go:89] found id: ""
	I1210 07:08:10.515673  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.515688  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:10.515696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:10.515753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:10.540198  303437 cri.go:89] found id: ""
	I1210 07:08:10.540223  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.540232  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:10.540246  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:10.540304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:10.565060  303437 cri.go:89] found id: ""
	I1210 07:08:10.565125  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.565140  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:10.565155  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:10.565219  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:10.593396  303437 cri.go:89] found id: ""
	I1210 07:08:10.593430  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.593438  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:10.593445  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:10.593510  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:10.617363  303437 cri.go:89] found id: ""
	I1210 07:08:10.617395  303437 logs.go:282] 0 containers: []
	W1210 07:08:10.617405  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:10.617414  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:10.617426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:10.677240  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:10.677317  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:10.692150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:10.692220  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:10.758835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:10.750046    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.750802    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.753608    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.754055    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:10.755555    4238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:10.758906  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:10.758934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:10.783900  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:10.783935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:13.316158  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:13.326768  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:13.326841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:13.354375  303437 cri.go:89] found id: ""
	I1210 07:08:13.354402  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.354411  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:13.354417  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:13.354486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:13.379439  303437 cri.go:89] found id: ""
	I1210 07:08:13.379467  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.379479  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:13.379491  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:13.379572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:13.406403  303437 cri.go:89] found id: ""
	I1210 07:08:13.406425  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.406433  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:13.406439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:13.406498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:13.441528  303437 cri.go:89] found id: ""
	I1210 07:08:13.441633  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.441665  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:13.441698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:13.441887  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:13.485367  303437 cri.go:89] found id: ""
	I1210 07:08:13.485407  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.485416  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:13.485423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:13.485491  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:13.515544  303437 cri.go:89] found id: ""
	I1210 07:08:13.515572  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.515582  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:13.515588  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:13.515646  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:13.541572  303437 cri.go:89] found id: ""
	I1210 07:08:13.541604  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.541613  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:13.541620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:13.541692  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:13.566335  303437 cri.go:89] found id: ""
	I1210 07:08:13.566366  303437 logs.go:282] 0 containers: []
	W1210 07:08:13.566376  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:13.566385  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:13.566396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:13.622359  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:13.622391  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:13.635632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:13.635661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:13.716667  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:13.707886    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.708774    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.710652    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.711407    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:13.713108    4342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:13.716691  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:13.716711  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:13.743967  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:13.744002  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:16.273094  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:16.283420  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:16.283488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:16.307336  303437 cri.go:89] found id: ""
	I1210 07:08:16.307358  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.307366  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:16.307373  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:16.307430  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:16.330448  303437 cri.go:89] found id: ""
	I1210 07:08:16.330476  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.330485  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:16.330492  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:16.330552  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:16.362050  303437 cri.go:89] found id: ""
	I1210 07:08:16.362080  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.362089  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:16.362096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:16.362172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:16.385708  303437 cri.go:89] found id: ""
	I1210 07:08:16.385732  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.385741  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:16.385747  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:16.385852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:16.421398  303437 cri.go:89] found id: ""
	I1210 07:08:16.421427  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.421436  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:16.421442  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:16.421509  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:16.449046  303437 cri.go:89] found id: ""
	I1210 07:08:16.449074  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.449082  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:16.449089  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:16.449166  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:16.475499  303437 cri.go:89] found id: ""
	I1210 07:08:16.475525  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.475534  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:16.475541  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:16.475619  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:16.502476  303437 cri.go:89] found id: ""
	I1210 07:08:16.502506  303437 logs.go:282] 0 containers: []
	W1210 07:08:16.502515  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:16.502524  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:16.502535  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:16.530854  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:16.530929  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:16.586993  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:16.587030  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:16.600337  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:16.600364  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:16.669775  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:16.660570    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.661341    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.663229    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.664010    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:16.665709    4465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:16.669849  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:16.669875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:19.199141  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:19.209670  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:19.209739  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:19.242748  303437 cri.go:89] found id: ""
	I1210 07:08:19.242775  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.242784  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:19.242791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:19.242849  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:19.266957  303437 cri.go:89] found id: ""
	I1210 07:08:19.266980  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.266989  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:19.266995  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:19.267066  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:19.293252  303437 cri.go:89] found id: ""
	I1210 07:08:19.293276  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.293285  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:19.293292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:19.293349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:19.318070  303437 cri.go:89] found id: ""
	I1210 07:08:19.318096  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.318105  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:19.318112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:19.318171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:19.341744  303437 cri.go:89] found id: ""
	I1210 07:08:19.341769  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.341783  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:19.341789  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:19.341847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:19.366605  303437 cri.go:89] found id: ""
	I1210 07:08:19.366632  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.366641  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:19.366648  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:19.366706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:19.393536  303437 cri.go:89] found id: ""
	I1210 07:08:19.393561  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.393570  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:19.393576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:19.393633  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:19.422513  303437 cri.go:89] found id: ""
	I1210 07:08:19.422535  303437 logs.go:282] 0 containers: []
	W1210 07:08:19.422546  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:19.422556  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:19.422566  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:19.453046  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:19.453118  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:19.488889  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:19.488918  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:19.547224  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:19.547259  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:19.562006  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:19.562035  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:19.625530  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:19.617363    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.618299    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620102    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.620530    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:19.622148    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:22.125860  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:22.136477  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:22.136550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:22.164763  303437 cri.go:89] found id: ""
	I1210 07:08:22.164786  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.164795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:22.164801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:22.164861  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:22.190879  303437 cri.go:89] found id: ""
	I1210 07:08:22.190900  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.190909  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:22.190915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:22.190973  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:22.215247  303437 cri.go:89] found id: ""
	I1210 07:08:22.215278  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.215286  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:22.215292  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:22.215351  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:22.239059  303437 cri.go:89] found id: ""
	I1210 07:08:22.239086  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.239095  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:22.239102  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:22.239163  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:22.264259  303437 cri.go:89] found id: ""
	I1210 07:08:22.264284  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.264293  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:22.264299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:22.264357  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:22.289890  303437 cri.go:89] found id: ""
	I1210 07:08:22.289913  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.289923  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:22.289929  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:22.289987  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:22.317025  303437 cri.go:89] found id: ""
	I1210 07:08:22.317051  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.317060  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:22.317067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:22.317124  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:22.341933  303437 cri.go:89] found id: ""
	I1210 07:08:22.341965  303437 logs.go:282] 0 containers: []
	W1210 07:08:22.341974  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:22.341992  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:22.342004  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:22.398310  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:22.398344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:22.413479  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:22.413520  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:22.490851  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:22.482047    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.482772    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.484530    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.485118    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:22.486931    4684 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:22.490873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:22.490888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:22.518860  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:22.518891  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:25.049142  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:25.060069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:25.060142  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:25.089203  303437 cri.go:89] found id: ""
	I1210 07:08:25.089232  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.089242  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:25.089248  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:25.089317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:25.118751  303437 cri.go:89] found id: ""
	I1210 07:08:25.118776  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.118785  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:25.118791  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:25.118848  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:25.143129  303437 cri.go:89] found id: ""
	I1210 07:08:25.143163  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.143173  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:25.143179  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:25.143240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:25.169805  303437 cri.go:89] found id: ""
	I1210 07:08:25.169830  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.169839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:25.169846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:25.169905  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:25.194716  303437 cri.go:89] found id: ""
	I1210 07:08:25.194743  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.194752  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:25.194759  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:25.194818  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:25.221104  303437 cri.go:89] found id: ""
	I1210 07:08:25.221127  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.221135  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:25.221141  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:25.221199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:25.249738  303437 cri.go:89] found id: ""
	I1210 07:08:25.249762  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.249771  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:25.249784  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:25.249842  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:25.273527  303437 cri.go:89] found id: ""
	I1210 07:08:25.273552  303437 logs.go:282] 0 containers: []
	W1210 07:08:25.273562  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:25.273572  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:25.273583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:25.298962  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:25.298996  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:25.326742  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:25.326770  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:25.381274  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:25.381307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:25.394260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:25.394289  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:25.485635  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:25.478200    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.479077    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.480640    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.481256    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:25.482290    4808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
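Each "describe nodes" attempt fails the same way: minikube's cached kubectl at /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl points at the cluster's kubeconfig, but nothing is listening on localhost:8443, so every API call dies with "connection refused". A hedged sketch of the same probe in Go, useful for confirming the apiserver port is simply closed (host and port are taken from the errors above; the rest is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial reproduces the kubectl failure mode seen in the log:
	// dial tcp [::1]:8443: connect: connection refused.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}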
	I1210 07:08:27.987151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:28.000081  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:28.000164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:28.025871  303437 cri.go:89] found id: ""
	I1210 07:08:28.025896  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.025904  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:28.025917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:28.025978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:28.050799  303437 cri.go:89] found id: ""
	I1210 07:08:28.050822  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.050831  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:28.050837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:28.050902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:28.075890  303437 cri.go:89] found id: ""
	I1210 07:08:28.075912  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.075921  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:28.075928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:28.075988  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:28.100461  303437 cri.go:89] found id: ""
	I1210 07:08:28.100483  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.100492  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:28.100499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:28.100555  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:28.126583  303437 cri.go:89] found id: ""
	I1210 07:08:28.126607  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.126617  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:28.126623  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:28.126682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:28.156736  303437 cri.go:89] found id: ""
	I1210 07:08:28.156758  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.156767  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:28.156774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:28.156837  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:28.181562  303437 cri.go:89] found id: ""
	I1210 07:08:28.181635  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.181657  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:28.181675  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:28.181760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:28.206007  303437 cri.go:89] found id: ""
	I1210 07:08:28.206081  303437 logs.go:282] 0 containers: []
	W1210 07:08:28.206106  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:28.206127  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:28.206163  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:28.219409  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:28.219445  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:28.285367  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:28.277994    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.278613    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280246    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.280604    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:28.282182    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:28.285387  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:28.285399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:28.310115  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:28.310150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:28.337400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:28.337427  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
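The timestamps show the loop retrying roughly every three seconds: re-check for a kube-apiserver process with pgrep, re-list containers, re-gather logs. A minimal sketch of that retry shape, assuming only the pgrep command visible in the log (the two-minute deadline and three-second interval here are illustrative, not minikube's actual values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative deadline; the real harness uses its own timeout.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same process check the log runs; pgrep exits non-zero when no process matches.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}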
	I1210 07:08:30.895800  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:30.906215  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:30.906285  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:30.940989  303437 cri.go:89] found id: ""
	I1210 07:08:30.941016  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.941025  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:30.941031  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:30.941089  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:30.968174  303437 cri.go:89] found id: ""
	I1210 07:08:30.968196  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.968205  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:30.968211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:30.968267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:30.997147  303437 cri.go:89] found id: ""
	I1210 07:08:30.997181  303437 logs.go:282] 0 containers: []
	W1210 07:08:30.997191  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:30.997198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:30.997324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:31.027985  303437 cri.go:89] found id: ""
	I1210 07:08:31.028024  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.028033  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:31.028039  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:31.028101  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:31.052662  303437 cri.go:89] found id: ""
	I1210 07:08:31.052684  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.052693  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:31.052699  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:31.052760  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:31.078026  303437 cri.go:89] found id: ""
	I1210 07:08:31.078051  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.078060  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:31.078067  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:31.078129  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:31.106108  303437 cri.go:89] found id: ""
	I1210 07:08:31.106135  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.106144  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:31.106150  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:31.106212  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:31.133109  303437 cri.go:89] found id: ""
	I1210 07:08:31.133133  303437 logs.go:282] 0 containers: []
	W1210 07:08:31.133141  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:31.133150  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:31.133162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:31.158330  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:31.158369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:31.190546  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:31.190570  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:31.245193  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:31.245228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:31.258848  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:31.258882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:31.332332  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:31.324865    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.325506    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327103    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.327745    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:31.329226    5034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:33.832563  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:33.843389  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:33.843462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:33.868588  303437 cri.go:89] found id: ""
	I1210 07:08:33.868612  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.868621  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:33.868627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:33.868691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:33.893467  303437 cri.go:89] found id: ""
	I1210 07:08:33.893492  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.893501  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:33.893507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:33.893568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:33.925853  303437 cri.go:89] found id: ""
	I1210 07:08:33.925883  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.925892  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:33.925899  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:33.925961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:33.957483  303437 cri.go:89] found id: ""
	I1210 07:08:33.957507  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.957516  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:33.957523  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:33.957582  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:33.990903  303437 cri.go:89] found id: ""
	I1210 07:08:33.990927  303437 logs.go:282] 0 containers: []
	W1210 07:08:33.990937  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:33.990943  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:33.991005  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:34.017222  303437 cri.go:89] found id: ""
	I1210 07:08:34.017249  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.017258  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:34.017264  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:34.017346  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:34.043888  303437 cri.go:89] found id: ""
	I1210 07:08:34.043913  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.043921  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:34.043928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:34.044001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:34.069229  303437 cri.go:89] found id: ""
	I1210 07:08:34.069299  303437 logs.go:282] 0 containers: []
	W1210 07:08:34.069314  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:34.069325  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:34.069337  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:34.127059  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:34.127093  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:34.140507  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:34.140537  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:34.205618  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:34.198206    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.198939    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200406    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.200898    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:34.202347    5131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:34.205639  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:34.205651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:34.230228  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:34.230258  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
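Each iteration gathers the same four log sources before retrying. A small sketch that replays those commands verbatim through bash, as the ssh_runner lines show (the command strings are copied from the log; the wrapper program is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The four gather commands visible in the log, in the order they appear.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u containerd -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		// Run through bash exactly as the log's /bin/bash -c "..." lines do.
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("command %q failed: %v\n", c, err)
		}
		fmt.Print(string(out))
	}
}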
	I1210 07:08:36.756574  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:36.768692  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:36.768761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:36.791900  303437 cri.go:89] found id: ""
	I1210 07:08:36.791922  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.791930  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:36.791936  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:36.791994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:36.818662  303437 cri.go:89] found id: ""
	I1210 07:08:36.818683  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.818691  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:36.818697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:36.818753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:36.846695  303437 cri.go:89] found id: ""
	I1210 07:08:36.846718  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.846727  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:36.846733  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:36.846794  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:36.870384  303437 cri.go:89] found id: ""
	I1210 07:08:36.870408  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.870417  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:36.870423  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:36.870486  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:36.895312  303437 cri.go:89] found id: ""
	I1210 07:08:36.895335  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.895343  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:36.895349  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:36.895408  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:36.926574  303437 cri.go:89] found id: ""
	I1210 07:08:36.926602  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.926611  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:36.926617  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:36.926684  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:36.956760  303437 cri.go:89] found id: ""
	I1210 07:08:36.956786  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.956795  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:36.956801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:36.956864  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:36.983460  303437 cri.go:89] found id: ""
	I1210 07:08:36.983480  303437 logs.go:282] 0 containers: []
	W1210 07:08:36.983488  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:36.983497  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:36.983512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:37.039889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:37.039926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:37.053431  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:37.053508  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:37.117639  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:37.109822    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.110762    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.111850    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.112355    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:37.113887    5245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:37.117660  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:37.117673  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:37.148315  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:37.148357  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:39.681355  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:39.695207  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:39.695290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:39.725514  303437 cri.go:89] found id: ""
	I1210 07:08:39.725547  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.725556  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:39.725563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:39.725632  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:39.750801  303437 cri.go:89] found id: ""
	I1210 07:08:39.750834  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.750844  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:39.750850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:39.750920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:39.775756  303437 cri.go:89] found id: ""
	I1210 07:08:39.775779  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.775788  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:39.775794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:39.775853  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:39.805059  303437 cri.go:89] found id: ""
	I1210 07:08:39.805085  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.805094  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:39.805100  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:39.805158  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:39.829219  303437 cri.go:89] found id: ""
	I1210 07:08:39.829284  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.829301  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:39.829309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:39.829371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:39.858144  303437 cri.go:89] found id: ""
	I1210 07:08:39.858168  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.858177  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:39.858184  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:39.858243  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:39.886805  303437 cri.go:89] found id: ""
	I1210 07:08:39.886838  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.886846  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:39.886853  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:39.886919  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:39.918064  303437 cri.go:89] found id: ""
	I1210 07:08:39.918089  303437 logs.go:282] 0 containers: []
	W1210 07:08:39.918099  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:39.918108  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:39.918119  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:39.982343  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:39.982418  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:39.995829  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:39.995854  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:40.078976  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:40.070049    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.070741    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.072500    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.073281    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:40.074971    5355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:40.079001  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:40.079033  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:40.105734  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:40.105778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:42.635583  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:42.646316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:42.646387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:42.687725  303437 cri.go:89] found id: ""
	I1210 07:08:42.687746  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.687755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:42.687761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:42.687821  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:42.731127  303437 cri.go:89] found id: ""
	I1210 07:08:42.731148  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.731157  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:42.731163  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:42.731224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:42.761187  303437 cri.go:89] found id: ""
	I1210 07:08:42.761218  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.761227  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:42.761232  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:42.761293  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:42.789156  303437 cri.go:89] found id: ""
	I1210 07:08:42.789184  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.789193  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:42.789200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:42.789259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:42.813508  303437 cri.go:89] found id: ""
	I1210 07:08:42.813533  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.813542  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:42.813548  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:42.813607  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:42.838567  303437 cri.go:89] found id: ""
	I1210 07:08:42.838591  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.838601  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:42.838608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:42.838667  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:42.862315  303437 cri.go:89] found id: ""
	I1210 07:08:42.862340  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.862348  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:42.862355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:42.862415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:42.888411  303437 cri.go:89] found id: ""
	I1210 07:08:42.888486  303437 logs.go:282] 0 containers: []
	W1210 07:08:42.888502  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:42.888513  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:42.888526  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:42.950009  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:42.950042  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:42.965591  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:42.965617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:43.040631  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:43.032737    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.033256    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035076    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.035768    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:43.037307    5465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:43.040653  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:43.040667  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:43.067163  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:43.067197  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:45.596845  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:45.607484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:45.607551  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:45.631812  303437 cri.go:89] found id: ""
	I1210 07:08:45.631841  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.631851  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:45.631857  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:45.631916  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:45.656686  303437 cri.go:89] found id: ""
	I1210 07:08:45.656709  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.656717  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:45.656724  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:45.656782  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:45.705244  303437 cri.go:89] found id: ""
	I1210 07:08:45.705270  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.705279  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:45.705286  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:45.705349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:45.733649  303437 cri.go:89] found id: ""
	I1210 07:08:45.733671  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.733679  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:45.733685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:45.733748  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:45.758319  303437 cri.go:89] found id: ""
	I1210 07:08:45.758340  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.758349  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:45.758355  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:45.758416  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:45.782339  303437 cri.go:89] found id: ""
	I1210 07:08:45.782360  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.782369  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:45.782375  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:45.782434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:45.806598  303437 cri.go:89] found id: ""
	I1210 07:08:45.806624  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.806633  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:45.806640  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:45.806700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:45.830909  303437 cri.go:89] found id: ""
	I1210 07:08:45.830933  303437 logs.go:282] 0 containers: []
	W1210 07:08:45.830942  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:45.830951  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:45.830962  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:45.859118  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:45.859148  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:45.920835  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:45.920869  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:45.935529  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:45.935555  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:46.015051  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:46.007172    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.007866    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.009596    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.010127    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:46.011638    5591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:46.015073  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:46.015086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:48.541223  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:48.551805  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:48.551874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:48.576818  303437 cri.go:89] found id: ""
	I1210 07:08:48.576878  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.576891  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:48.576898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:48.576963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:48.601980  303437 cri.go:89] found id: ""
	I1210 07:08:48.602005  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.602014  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:48.602020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:48.602082  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:48.634301  303437 cri.go:89] found id: ""
	I1210 07:08:48.634324  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.634333  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:48.634339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:48.634399  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:48.665296  303437 cri.go:89] found id: ""
	I1210 07:08:48.665321  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.665330  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:48.665336  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:48.665395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:48.696396  303437 cri.go:89] found id: ""
	I1210 07:08:48.696421  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.696430  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:48.696437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:48.696500  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:48.732263  303437 cri.go:89] found id: ""
	I1210 07:08:48.732288  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.732297  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:48.732304  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:48.732365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:48.759127  303437 cri.go:89] found id: ""
	I1210 07:08:48.759152  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.759161  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:48.759170  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:48.759229  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:48.783999  303437 cri.go:89] found id: ""
	I1210 07:08:48.784077  303437 logs.go:282] 0 containers: []
	W1210 07:08:48.784100  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:48.784116  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:48.784141  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:48.797102  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:48.797132  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:48.859523  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:48.852279    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.852826    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854371    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.854816    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:48.856244    5681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:48.859546  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:48.859560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:48.884680  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:48.884714  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:48.923070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:48.923098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:51.485606  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:51.496059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:51.496133  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:51.521404  303437 cri.go:89] found id: ""
	I1210 07:08:51.521429  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.521438  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:51.521444  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:51.521504  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:51.546743  303437 cri.go:89] found id: ""
	I1210 07:08:51.546768  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.546777  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:51.546785  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:51.546847  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:51.577064  303437 cri.go:89] found id: ""
	I1210 07:08:51.577089  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.577099  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:51.577105  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:51.577171  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:51.602384  303437 cri.go:89] found id: ""
	I1210 07:08:51.602410  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.602420  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:51.602426  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:51.602484  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:51.630338  303437 cri.go:89] found id: ""
	I1210 07:08:51.630367  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.630375  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:51.630382  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:51.630440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:51.660663  303437 cri.go:89] found id: ""
	I1210 07:08:51.660691  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.660700  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:51.660706  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:51.660765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:51.689142  303437 cri.go:89] found id: ""
	I1210 07:08:51.689170  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.689179  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:51.689186  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:51.689246  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:51.723765  303437 cri.go:89] found id: ""
	I1210 07:08:51.723792  303437 logs.go:282] 0 containers: []
	W1210 07:08:51.723800  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:51.723810  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:51.723824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:51.781842  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:51.781873  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:51.795845  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:51.795872  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:51.863519  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:51.855577    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.856333    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858048    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.858719    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:51.860050    5795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:51.863583  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:51.863611  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:51.888478  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:51.888510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
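	On each retry the same five log sources are gathered: kubelet and containerd via journalctl, the kernel ring buffer via dmesg, "describe nodes" via the bundled kubectl, and the container list via crictl. To pull the same data once by hand, the node commands from the log can be wrapped in minikube ssh (<profile> is a placeholder for the cluster under test; the inner commands are exactly those shown above):

	    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p <profile> ssh -- sudo journalctl -u containerd -n 400
	    minikube -p <profile> ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	    minikube -p <profile> ssh -- sudo crictl ps -a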
	I1210 07:08:54.421755  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:54.432308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:54.432377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:54.458171  303437 cri.go:89] found id: ""
	I1210 07:08:54.458194  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.458209  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:54.458216  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:54.458279  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:54.485658  303437 cri.go:89] found id: ""
	I1210 07:08:54.485689  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.485698  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:54.485704  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:54.485763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:54.514257  303437 cri.go:89] found id: ""
	I1210 07:08:54.514279  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.514287  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:54.514294  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:54.514360  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:54.538966  303437 cri.go:89] found id: ""
	I1210 07:08:54.539053  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.539078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:54.539096  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:54.539182  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:54.563486  303437 cri.go:89] found id: ""
	I1210 07:08:54.563512  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.563521  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:54.563528  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:54.563588  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:54.588780  303437 cri.go:89] found id: ""
	I1210 07:08:54.588805  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.588814  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:54.588827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:54.588886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:54.618322  303437 cri.go:89] found id: ""
	I1210 07:08:54.618346  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.618356  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:54.618362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:54.618421  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:54.643564  303437 cri.go:89] found id: ""
	I1210 07:08:54.643592  303437 logs.go:282] 0 containers: []
	W1210 07:08:54.643602  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:54.643612  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:54.643624  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:08:54.683994  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:54.684069  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:54.743900  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:54.743934  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:54.757240  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:54.757266  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:54.820795  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:54.813522    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.813935    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.815612    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.816020    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:54.817550    5920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:54.820815  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:54.820830  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.345608  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:57.358499  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:08:57.358625  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:08:57.384563  303437 cri.go:89] found id: ""
	I1210 07:08:57.384589  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.384598  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:08:57.384604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:08:57.384682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:08:57.408236  303437 cri.go:89] found id: ""
	I1210 07:08:57.408263  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.408272  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:08:57.408279  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:08:57.408337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:08:57.432014  303437 cri.go:89] found id: ""
	I1210 07:08:57.432037  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.432045  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:08:57.432052  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:08:57.432111  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:08:57.455970  303437 cri.go:89] found id: ""
	I1210 07:08:57.456046  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.456068  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:08:57.456088  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:08:57.456173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:08:57.480680  303437 cri.go:89] found id: ""
	I1210 07:08:57.480752  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.480767  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:08:57.480775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:08:57.480841  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:08:57.505993  303437 cri.go:89] found id: ""
	I1210 07:08:57.506026  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.506037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:08:57.506043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:08:57.506153  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:08:57.530713  303437 cri.go:89] found id: ""
	I1210 07:08:57.530739  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.530748  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:08:57.530754  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:08:57.530814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:08:57.555806  303437 cri.go:89] found id: ""
	I1210 07:08:57.555871  303437 logs.go:282] 0 containers: []
	W1210 07:08:57.555897  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:08:57.555918  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:08:57.555943  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:08:57.611292  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:08:57.611326  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:08:57.624707  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:08:57.624735  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:08:57.707745  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:08:57.699963    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.701079    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702632    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.702942    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:08:57.704373    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:08:57.707768  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:08:57.707780  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:08:57.734701  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:08:57.734734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:00.266582  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:00.305476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:00.305924  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:00.366724  303437 cri.go:89] found id: ""
	I1210 07:09:00.366806  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.366839  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:00.366879  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:00.366992  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:00.396827  303437 cri.go:89] found id: ""
	I1210 07:09:00.396912  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.396939  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:00.396960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:00.397064  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:00.424504  303437 cri.go:89] found id: ""
	I1210 07:09:00.424531  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.424540  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:00.424547  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:00.424609  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:00.453893  303437 cri.go:89] found id: ""
	I1210 07:09:00.453921  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.453931  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:00.453938  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:00.454001  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:00.480406  303437 cri.go:89] found id: ""
	I1210 07:09:00.480432  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.480441  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:00.480448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:00.480508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:00.505747  303437 cri.go:89] found id: ""
	I1210 07:09:00.505779  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.505788  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:00.505795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:00.505856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:00.535288  303437 cri.go:89] found id: ""
	I1210 07:09:00.535311  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.535320  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:00.535326  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:00.535387  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:00.565945  303437 cri.go:89] found id: ""
	I1210 07:09:00.565972  303437 logs.go:282] 0 containers: []
	W1210 07:09:00.565989  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:00.566015  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:00.566034  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:00.596202  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:00.596228  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:00.651714  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:00.651748  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:00.666338  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:00.666375  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:00.745706  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:00.737632    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.738139    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.739647    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.740156    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:00.741940    6144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:00.745728  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:00.745742  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.272316  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:03.283628  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:03.283695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:03.309180  303437 cri.go:89] found id: ""
	I1210 07:09:03.309263  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.309285  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:03.309300  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:03.309373  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:03.334971  303437 cri.go:89] found id: ""
	I1210 07:09:03.334994  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.335003  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:03.335035  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:03.335096  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:03.361090  303437 cri.go:89] found id: ""
	I1210 07:09:03.361116  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.361125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:03.361131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:03.361189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:03.385067  303437 cri.go:89] found id: ""
	I1210 07:09:03.385141  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.385161  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:03.385169  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:03.385259  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:03.420428  303437 cri.go:89] found id: ""
	I1210 07:09:03.420450  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.420459  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:03.420465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:03.420527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:03.453131  303437 cri.go:89] found id: ""
	I1210 07:09:03.453153  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.453162  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:03.453168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:03.453281  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:03.485206  303437 cri.go:89] found id: ""
	I1210 07:09:03.485236  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.485245  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:03.485251  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:03.485311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:03.517204  303437 cri.go:89] found id: ""
	I1210 07:09:03.517229  303437 logs.go:282] 0 containers: []
	W1210 07:09:03.517238  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:03.517253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:03.517265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:03.530656  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:03.530728  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:03.596244  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:03.588660    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.589167    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.590688    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.591215    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:03.592799    6245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:03.596305  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:03.596342  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:03.621847  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:03.621882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:03.649988  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:03.650024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:06.209516  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:06.219893  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:06.219970  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:06.244763  303437 cri.go:89] found id: ""
	I1210 07:09:06.244786  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.244795  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:06.244801  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:06.244862  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:06.271479  303437 cri.go:89] found id: ""
	I1210 07:09:06.271501  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.271509  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:06.271515  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:06.271572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:06.295607  303437 cri.go:89] found id: ""
	I1210 07:09:06.295635  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.295644  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:06.295651  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:06.295706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:06.320774  303437 cri.go:89] found id: ""
	I1210 07:09:06.320798  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.320806  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:06.320823  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:06.320886  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:06.349033  303437 cri.go:89] found id: ""
	I1210 07:09:06.349056  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.349064  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:06.349070  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:06.349127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:06.377330  303437 cri.go:89] found id: ""
	I1210 07:09:06.377352  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.377361  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:06.377367  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:06.377426  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:06.400983  303437 cri.go:89] found id: ""
	I1210 07:09:06.401005  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.401014  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:06.401021  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:06.401080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:06.431299  303437 cri.go:89] found id: ""
	I1210 07:09:06.431327  303437 logs.go:282] 0 containers: []
	W1210 07:09:06.431336  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:06.431345  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:06.431356  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:06.462335  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:06.462369  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:06.495348  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:06.495376  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:06.551592  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:06.551627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:06.565270  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:06.565305  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:06.629933  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:06.621965    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.622716    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.624429    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.625124    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:06.626708    6370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
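	Every "describe nodes" attempt fails the same way: the bundled kubectl dials the apiserver endpoint from /var/lib/minikube/kubeconfig (localhost:8443) and gets connection refused, which simply means nothing is listening on that port yet. A quick way to confirm that from inside the node (the ss and curl commands are a generic sketch, not taken from the log):

	    # -l listening sockets, -t TCP, -n numeric ports
	    sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"
	    # -k skips TLS verification; expect 'connection refused' while the apiserver is down
	    curl -ks https://localhost:8443/healthz || true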
	I1210 07:09:09.131098  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:09.141585  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:09.141658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:09.168859  303437 cri.go:89] found id: ""
	I1210 07:09:09.168889  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.168898  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:09.168904  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:09.168966  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:09.193427  303437 cri.go:89] found id: ""
	I1210 07:09:09.193448  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.193457  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:09.193463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:09.193520  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:09.217804  303437 cri.go:89] found id: ""
	I1210 07:09:09.217928  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.217954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:09.217975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:09.218083  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:09.242204  303437 cri.go:89] found id: ""
	I1210 07:09:09.242277  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.242303  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:09.242322  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:09.242404  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:09.268889  303437 cri.go:89] found id: ""
	I1210 07:09:09.268912  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.268920  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:09.268926  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:09.268984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:09.293441  303437 cri.go:89] found id: ""
	I1210 07:09:09.293514  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.293545  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:09.293563  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:09.293671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:09.321925  303437 cri.go:89] found id: ""
	I1210 07:09:09.321946  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.321954  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:09.321960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:09.322026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:09.350603  303437 cri.go:89] found id: ""
	I1210 07:09:09.350623  303437 logs.go:282] 0 containers: []
	W1210 07:09:09.350631  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:09.350641  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:09.350653  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:09.363382  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:09.363409  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:09.429669  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:09.421586    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.422246    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424200    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.424743    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:09.426494    6462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:09.429690  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:09.429702  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:09.461410  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:09.461444  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:09.500508  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:09.500536  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
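
(The cycle above now repeats with fresh timestamps: minikube is polling for a healthy kube-apiserver. Each pass runs pgrep for the apiserver process, asks the CRI runtime for each control-plane container by name, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying a few seconds later, as the timestamps show. A minimal Go sketch of this poll-and-gather shape follows; the runSSH helper, the deadline, and the retry interval are illustrative assumptions, not minikube's actual API.)

    // apiserver_wait_sketch.go (illustrative only; not minikube source).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // runSSH stands in for minikube's ssh_runner (an assumption for this
    // sketch): it runs a shell command and returns its trimmed output.
    func runSSH(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            // 1. Is a kube-apiserver process running at all?
            if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            // 2. Ask the CRI runtime for each control-plane container by
            //    name; crictl prints nothing when no container matches.
            for _, name := range []string{"kube-apiserver", "etcd", "coredns",
                "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
                if ids, _ := runSSH("sudo crictl ps -a --quiet --name=" + name); ids == "" {
                    fmt.Printf("no container found matching %q\n", name)
                }
            }
            // 3. Nothing is up yet: gather diagnostics, then retry.
            runSSH("sudo journalctl -u kubelet -n 400")
            time.Sleep(3 * time.Second) // assumed interval (~3s in the log)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
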
	I1210 07:09:12.055555  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:12.066220  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:12.066289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:12.093446  303437 cri.go:89] found id: ""
	I1210 07:09:12.093468  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.093477  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:12.093484  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:12.093543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:12.119338  303437 cri.go:89] found id: ""
	I1210 07:09:12.119361  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.119370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:12.119376  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:12.119436  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:12.146532  303437 cri.go:89] found id: ""
	I1210 07:09:12.146553  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.146562  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:12.146568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:12.146623  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:12.175977  303437 cri.go:89] found id: ""
	I1210 07:09:12.175999  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.176007  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:12.176013  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:12.176072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:12.200557  303437 cri.go:89] found id: ""
	I1210 07:09:12.200579  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.200588  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:12.200595  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:12.200651  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:12.224652  303437 cri.go:89] found id: ""
	I1210 07:09:12.224674  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.224684  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:12.224690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:12.224750  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:12.249147  303437 cri.go:89] found id: ""
	I1210 07:09:12.249171  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.249180  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:12.249187  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:12.249253  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:12.272500  303437 cri.go:89] found id: ""
	I1210 07:09:12.272535  303437 logs.go:282] 0 containers: []
	W1210 07:09:12.272543  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:12.272553  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:12.272580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:12.328368  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:12.328399  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:12.341669  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:12.341699  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:12.401653  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:12.394790    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.395266    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396400    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.396898    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:12.398538    6577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:12.401708  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:12.401734  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:12.431751  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:12.431791  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:14.963924  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:14.974138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:14.974206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:15.001054  303437 cri.go:89] found id: ""
	I1210 07:09:15.001080  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.001089  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:15.001097  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:15.001170  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:15.040020  303437 cri.go:89] found id: ""
	I1210 07:09:15.040044  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.040053  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:15.040059  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:15.040121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:15.065063  303437 cri.go:89] found id: ""
	I1210 07:09:15.065086  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.065095  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:15.065101  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:15.065161  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:15.089689  303437 cri.go:89] found id: ""
	I1210 07:09:15.089714  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.089723  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:15.089729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:15.089797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:15.117422  303437 cri.go:89] found id: ""
	I1210 07:09:15.117446  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.117455  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:15.117462  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:15.117521  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:15.143475  303437 cri.go:89] found id: ""
	I1210 07:09:15.143498  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.143507  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:15.143514  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:15.143580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:15.168329  303437 cri.go:89] found id: ""
	I1210 07:09:15.168353  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.168363  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:15.168370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:15.168439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:15.196848  303437 cri.go:89] found id: ""
	I1210 07:09:15.196870  303437 logs.go:282] 0 containers: []
	W1210 07:09:15.196879  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:15.196889  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:15.196901  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:15.210071  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:15.210098  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:15.270835  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:15.262938    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.263645    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265180    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.265486    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:15.267063    6686 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:15.270858  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:15.270870  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:15.296738  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:15.296774  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:15.322760  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:15.322786  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
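
(Every "describe nodes" attempt above fails the same way: kubectl cannot reach https://localhost:8443, the apiserver's secure port on the node, and gets "connection refused". Together with the empty crictl listings this suggests the kube-apiserver container was never created at all, since crictl ps -a would also list exited containers. A hypothetical standalone probe for that condition, sketched in Go; the port literal comes from the log, everything else is an assumption:)

    // probe_apiserver.go (hypothetical): checks whether anything listens on
    // the apiserver port that kubectl keeps failing to reach above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // This is the state the log captures: connect: connection refused.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
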
	I1210 07:09:17.877564  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:17.887770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:17.887840  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:17.923653  303437 cri.go:89] found id: ""
	I1210 07:09:17.923691  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.923701  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:17.923708  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:17.923789  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:17.953013  303437 cri.go:89] found id: ""
	I1210 07:09:17.953058  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.953067  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:17.953073  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:17.953155  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:17.987520  303437 cri.go:89] found id: ""
	I1210 07:09:17.987565  303437 logs.go:282] 0 containers: []
	W1210 07:09:17.987574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:17.987587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:17.987655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:18.017344  303437 cri.go:89] found id: ""
	I1210 07:09:18.017367  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.017378  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:18.017385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:18.017448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:18.043560  303437 cri.go:89] found id: ""
	I1210 07:09:18.043592  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.043602  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:18.043609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:18.043670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:18.071253  303437 cri.go:89] found id: ""
	I1210 07:09:18.071299  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.071308  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:18.071317  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:18.071395  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:18.100328  303437 cri.go:89] found id: ""
	I1210 07:09:18.100350  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.100359  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:18.100364  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:18.100422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:18.124828  303437 cri.go:89] found id: ""
	I1210 07:09:18.124855  303437 logs.go:282] 0 containers: []
	W1210 07:09:18.124864  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:18.124873  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:18.124906  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:18.180441  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:18.180473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:18.193811  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:18.193838  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:18.254675  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:18.247379    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.248083    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.249676    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.250042    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:18.251523    6797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:18.254700  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:18.254720  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:18.280133  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:18.280167  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:20.813863  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:20.824103  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:20.824175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:20.847793  303437 cri.go:89] found id: ""
	I1210 07:09:20.847818  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.847827  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:20.847833  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:20.847896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:20.873295  303437 cri.go:89] found id: ""
	I1210 07:09:20.873319  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.873328  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:20.873334  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:20.873394  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:20.897570  303437 cri.go:89] found id: ""
	I1210 07:09:20.897594  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.897603  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:20.897609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:20.897665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:20.932999  303437 cri.go:89] found id: ""
	I1210 07:09:20.933025  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.933034  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:20.933041  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:20.933099  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:20.967096  303437 cri.go:89] found id: ""
	I1210 07:09:20.967123  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.967137  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:20.967143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:20.967203  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:20.994239  303437 cri.go:89] found id: ""
	I1210 07:09:20.994265  303437 logs.go:282] 0 containers: []
	W1210 07:09:20.994274  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:20.994281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:20.994337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:21.020205  303437 cri.go:89] found id: ""
	I1210 07:09:21.020230  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.020238  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:21.020245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:21.020305  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:21.049401  303437 cri.go:89] found id: ""
	I1210 07:09:21.049427  303437 logs.go:282] 0 containers: []
	W1210 07:09:21.049436  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:21.049445  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:21.049457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:21.062901  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:21.062926  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:21.122517  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:21.115640    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.116120    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117591    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.117985    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:21.119485    6903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:21.122537  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:21.122550  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:21.147196  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:21.147230  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:21.177192  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:21.177221  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:23.732133  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:23.742890  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:23.742961  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:23.774220  303437 cri.go:89] found id: ""
	I1210 07:09:23.774243  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.774251  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:23.774257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:23.774317  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:23.798816  303437 cri.go:89] found id: ""
	I1210 07:09:23.798837  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.798846  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:23.798852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:23.798910  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:23.823244  303437 cri.go:89] found id: ""
	I1210 07:09:23.823318  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.823341  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:23.823362  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:23.823453  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:23.851474  303437 cri.go:89] found id: ""
	I1210 07:09:23.851500  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.851510  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:23.851516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:23.851598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:23.876565  303437 cri.go:89] found id: ""
	I1210 07:09:23.876641  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.876665  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:23.876679  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:23.876753  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:23.901598  303437 cri.go:89] found id: ""
	I1210 07:09:23.901624  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.901632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:23.901641  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:23.901698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:23.939880  303437 cri.go:89] found id: ""
	I1210 07:09:23.945774  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.945837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:23.945917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:23.946105  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:23.983936  303437 cri.go:89] found id: ""
	I1210 07:09:23.984019  303437 logs.go:282] 0 containers: []
	W1210 07:09:23.984045  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:23.984096  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:23.984128  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:24.047417  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:24.047454  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:24.060782  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:24.060808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:24.123547  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:24.115234    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.115940    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.117664    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.118067    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:24.119622    7018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:24.123570  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:24.123583  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:24.148767  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:24.148802  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
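
(The container-status gatherer uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's path with backtick command substitution, defaults to the bare name if which finds nothing, and falls back to docker ps -a when crictl fails, so one gatherer covers both containerd and Docker runtimes. The same try-crictl-then-docker order in Go; psAll is an illustrative helper, not a minikube function:)

    // containerstatus_sketch.go (illustrative): list all containers via
    // crictl, falling back to docker, mirroring the logged shell command.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // psAll tries crictl first and falls back to docker, in the same order
    // as the shell fallback shown in the log line above.
    func psAll() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := psAll()
        if err != nil {
            fmt.Println("neither crictl nor docker available:", err)
            return
        }
        fmt.Print(out)
    }
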
	I1210 07:09:26.679138  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:26.691239  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:26.691311  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:26.720725  303437 cri.go:89] found id: ""
	I1210 07:09:26.720748  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.720756  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:26.720763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:26.720824  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:26.745903  303437 cri.go:89] found id: ""
	I1210 07:09:26.745926  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.745935  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:26.745941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:26.745999  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:26.771250  303437 cri.go:89] found id: ""
	I1210 07:09:26.771279  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.771289  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:26.771295  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:26.771354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:26.795771  303437 cri.go:89] found id: ""
	I1210 07:09:26.795795  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.795804  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:26.795810  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:26.795912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:26.820992  303437 cri.go:89] found id: ""
	I1210 07:09:26.821013  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.821023  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:26.821029  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:26.821091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:26.849537  303437 cri.go:89] found id: ""
	I1210 07:09:26.849559  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.849568  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:26.849575  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:26.849631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:26.882245  303437 cri.go:89] found id: ""
	I1210 07:09:26.882274  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.882284  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:26.882290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:26.882354  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:26.907397  303437 cri.go:89] found id: ""
	I1210 07:09:26.907421  303437 logs.go:282] 0 containers: []
	W1210 07:09:26.907437  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:26.907446  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:26.907457  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:26.945593  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:26.945619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:27.009478  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:27.009515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:27.023242  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:27.023268  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:27.088362  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:27.080378    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.081206    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.082792    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.083225    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:27.084935    7140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:27.088384  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:27.088396  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:29.614457  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:29.624717  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:29.624839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:29.648905  303437 cri.go:89] found id: ""
	I1210 07:09:29.648929  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.648938  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:29.648944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:29.649031  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:29.693513  303437 cri.go:89] found id: ""
	I1210 07:09:29.693576  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.693597  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:29.693615  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:29.693703  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:29.718997  303437 cri.go:89] found id: ""
	I1210 07:09:29.719090  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.719114  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:29.719132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:29.719215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:29.749199  303437 cri.go:89] found id: ""
	I1210 07:09:29.749266  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.749289  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:29.749307  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:29.749402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:29.774719  303437 cri.go:89] found id: ""
	I1210 07:09:29.774795  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.774819  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:29.774841  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:29.774931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:29.799913  303437 cri.go:89] found id: ""
	I1210 07:09:29.799977  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.799999  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:29.800017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:29.800095  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:29.823673  303437 cri.go:89] found id: ""
	I1210 07:09:29.823747  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.823769  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:29.823787  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:29.823859  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:29.848157  303437 cri.go:89] found id: ""
	I1210 07:09:29.848188  303437 logs.go:282] 0 containers: []
	W1210 07:09:29.848198  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:29.848208  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:29.848219  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:29.876009  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:29.876037  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:29.932276  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:29.932307  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:29.949872  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:29.949898  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:30.045838  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:30.034181    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.034859    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037026    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.037737    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:30.040198    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:30.045873  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:30.045888  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.576040  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:32.587217  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:32.587298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:32.613690  303437 cri.go:89] found id: ""
	I1210 07:09:32.613713  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.613722  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:32.613729  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:32.613797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:32.639153  303437 cri.go:89] found id: ""
	I1210 07:09:32.639178  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.639187  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:32.639193  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:32.639256  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:32.673727  303437 cri.go:89] found id: ""
	I1210 07:09:32.673799  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.673808  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:32.673815  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:32.673882  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:32.709195  303437 cri.go:89] found id: ""
	I1210 07:09:32.709222  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.709231  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:32.709238  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:32.709298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:32.737425  303437 cri.go:89] found id: ""
	I1210 07:09:32.737458  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.737467  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:32.737474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:32.737532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:32.766042  303437 cri.go:89] found id: ""
	I1210 07:09:32.766069  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.766078  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:32.766086  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:32.766145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:32.791060  303437 cri.go:89] found id: ""
	I1210 07:09:32.791089  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.791098  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:32.791104  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:32.791164  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:32.815424  303437 cri.go:89] found id: ""
	I1210 07:09:32.815445  303437 logs.go:282] 0 containers: []
	W1210 07:09:32.815453  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
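Each block above is one pass of the same probe: for every control-plane component, list all CRI containers (running or exited) whose name matches, and warn when nothing is found. Empty results for all eight names mean the control plane was never created, not that it crashed. A hedged sketch of that loop, with the component list and crictl flags taken from the log and error handling simplified:

	// probe_containers.go: sketch of the per-component crictl probe above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs; -a includes exited containers.
			// Errors are ignored for brevity; crictl may legitimately fail
			// when the runtime itself is down.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if len(strings.Fields(string(out))) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}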
	I1210 07:09:32.815462  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:32.815473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:32.845676  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:32.845718  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:32.877898  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:32.877927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:32.934870  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:32.934903  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:32.950436  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:32.950516  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:33.023900  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:33.014878    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.015644    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.017336    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.018088    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:33.019810    7367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
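The failure mode is consistent: every kubectl call to https://localhost:8443 is refused at the TCP layer, which matches the empty crictl listings; nothing is listening because no apiserver container ever started. A refused connection (no listener) is worth distinguishing from a timeout (listener present but hung); a quick hypothetical check, not part of the test itself:

	// dial_apiserver.go: hypothetical connectivity check, not part of the test.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// "connection refused" here means no process owns the port,
			// matching the kubectl errors in the log above.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}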
	I1210 07:09:35.524178  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:35.535098  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:35.535173  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:35.563582  303437 cri.go:89] found id: ""
	I1210 07:09:35.563606  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.563614  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:35.563621  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:35.563682  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:35.589346  303437 cri.go:89] found id: ""
	I1210 07:09:35.589368  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.589377  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:35.589384  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:35.589442  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:35.613807  303437 cri.go:89] found id: ""
	I1210 07:09:35.613833  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.613841  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:35.613848  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:35.613907  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:35.643139  303437 cri.go:89] found id: ""
	I1210 07:09:35.643162  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.643172  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:35.643178  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:35.643240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:35.682597  303437 cri.go:89] found id: ""
	I1210 07:09:35.682629  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.682638  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:35.682645  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:35.682711  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:35.716718  303437 cri.go:89] found id: ""
	I1210 07:09:35.716739  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.716747  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:35.716753  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:35.716811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:35.746357  303437 cri.go:89] found id: ""
	I1210 07:09:35.746378  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.746387  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:35.746393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:35.746455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:35.773219  303437 cri.go:89] found id: ""
	I1210 07:09:35.773240  303437 logs.go:282] 0 containers: []
	W1210 07:09:35.773251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:35.773260  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:35.773273  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:35.838850  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:35.830993    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.831530    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833193    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.833865    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:35.835553    7460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:35.838868  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:35.838882  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:35.864265  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:35.864299  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:35.892689  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:35.892716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:35.952281  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:35.952311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
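The timestamps show the collector repeating the whole probe roughly every three seconds (07:09:32, :35, :38, ...), each pass opening with pgrep -xnf kube-apiserver.*minikube.* to see whether an apiserver process has appeared yet. A sketch of that poll-until-deadline shape; the interval and deadline here are illustrative, not minikube's actual values:

	// wait_apiserver.go: sketch of the ~3s polling loop visible in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
		for time.Now().Before(deadline) {
			// pgrep exits 0 only if a matching process exists.
			err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}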
	I1210 07:09:38.468021  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:38.478500  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:38.478574  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:38.505131  303437 cri.go:89] found id: ""
	I1210 07:09:38.505156  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.505174  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:38.505197  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:38.505267  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:38.529142  303437 cri.go:89] found id: ""
	I1210 07:09:38.529166  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.529175  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:38.529181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:38.529239  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:38.554410  303437 cri.go:89] found id: ""
	I1210 07:09:38.554434  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.554442  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:38.554449  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:38.554506  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:38.581372  303437 cri.go:89] found id: ""
	I1210 07:09:38.581395  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.581403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:38.581409  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:38.581472  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:38.606157  303437 cri.go:89] found id: ""
	I1210 07:09:38.606182  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.606191  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:38.606198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:38.606261  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:38.630691  303437 cri.go:89] found id: ""
	I1210 07:09:38.630717  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.630725  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:38.630731  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:38.630788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:38.655423  303437 cri.go:89] found id: ""
	I1210 07:09:38.655447  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.655456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:38.655463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:38.655524  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:38.685788  303437 cri.go:89] found id: ""
	I1210 07:09:38.685814  303437 logs.go:282] 0 containers: []
	W1210 07:09:38.685822  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:38.685832  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:38.685844  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:38.750704  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:38.750740  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.764389  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:38.764417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:38.825803  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:38.818667    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.819289    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.820727    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.821107    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:38.822534    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:38.825824  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:38.825836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:38.850907  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:38.850941  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
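The "container status" step is deliberately runtime-agnostic: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's path when possible and falls back to docker when crictl is missing or fails. The same first-success-wins idea in Go; the commands are taken from the log, while the helper itself is hypothetical:

	// container_status.go: sketch of the crictl-with-docker-fallback step.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// firstSuccess runs each command line until one exits 0. Hypothetical helper.
	func firstSuccess(cmds ...string) (string, error) {
		var lastErr error
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).Output()
			if err == nil {
				return string(out), nil
			}
			lastErr = err
		}
		return "", lastErr
	}

	func main() {
		out, err := firstSuccess(
			"sudo `which crictl || echo crictl` ps -a",
			"sudo docker ps -a",
		)
		if err != nil {
			fmt.Println("no container runtime answered:", err)
			return
		}
		fmt.Print(out)
	}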
	[log-gathering pass at 07:09:41 omitted: identical to the 07:09:38 pass above, with eight empty crictl listings and the same "connection refused" describe-nodes failure (PID 7687)]
	I1210 07:09:44.302640  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:44.313058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:44.313127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:44.341886  303437 cri.go:89] found id: ""
	I1210 07:09:44.341914  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.341929  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:44.341935  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:44.341995  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:44.367439  303437 cri.go:89] found id: ""
	I1210 07:09:44.367460  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.367469  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:44.367475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:44.367532  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:44.391640  303437 cri.go:89] found id: ""
	I1210 07:09:44.391668  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.391678  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:44.391685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:44.391780  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:44.421140  303437 cri.go:89] found id: ""
	I1210 07:09:44.421169  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.421178  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:44.421185  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:44.421263  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:44.444759  303437 cri.go:89] found id: ""
	I1210 07:09:44.444783  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.444792  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:44.444798  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:44.444858  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:44.468926  303437 cri.go:89] found id: ""
	I1210 07:09:44.468959  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.468968  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:44.468978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:44.469045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:44.495556  303437 cri.go:89] found id: ""
	I1210 07:09:44.495581  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.495590  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:44.495597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:44.495676  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:44.519631  303437 cri.go:89] found id: ""
	I1210 07:09:44.519654  303437 logs.go:282] 0 containers: []
	W1210 07:09:44.519663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:44.519672  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:44.519684  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
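The dmesg step narrows the kernel ring buffer to warning severity and above (--level warn,err,crit,alert,emerg) and keeps the last 400 lines, so disk, cgroup, or OOM trouble would surface here even with the control plane down. A hypothetical triage pass on top of that output; the "Out of memory" marker is an assumption about the kernel's OOM-killer message, not something this log contains:

	// scan_dmesg.go: hypothetical triage on top of the dmesg step above,
	// flagging out-of-memory kills, one common reason a runtime loses pods.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").Output()
		if err != nil {
			fmt.Println("dmesg failed:", err)
			return
		}
		for _, line := range strings.Split(string(out), "\n") {
			// "Out of memory" is the usual OOM-killer marker; an assumption here.
			if strings.Contains(line, "Out of memory") {
				fmt.Println("possible OOM kill:", line)
			}
		}
	}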
	I1210 07:09:44.532940  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:44.532964  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:44.598861  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:44.590948    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.591655    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593344    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.593846    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:44.595521    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:44.598921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:44.598950  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:44.624141  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:44.624181  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:44.651186  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:44.651214  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:47.208206  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:47.218613  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:47.218695  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:47.244616  303437 cri.go:89] found id: ""
	I1210 07:09:47.244643  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.244652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:47.244659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:47.244717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:47.270353  303437 cri.go:89] found id: ""
	I1210 07:09:47.270378  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.270387  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:47.270393  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:47.270469  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:47.296082  303437 cri.go:89] found id: ""
	I1210 07:09:47.296108  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.296117  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:47.296123  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:47.296181  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:47.320296  303437 cri.go:89] found id: ""
	I1210 07:09:47.320362  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.320380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:47.320388  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:47.320459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:47.345546  303437 cri.go:89] found id: ""
	I1210 07:09:47.345571  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.345580  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:47.345587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:47.345647  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:47.375423  303437 cri.go:89] found id: ""
	I1210 07:09:47.375458  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.375467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:47.375475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:47.375536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:47.399857  303437 cri.go:89] found id: ""
	I1210 07:09:47.399880  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.399894  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:47.399901  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:47.399963  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:47.431984  303437 cri.go:89] found id: ""
	I1210 07:09:47.432011  303437 logs.go:282] 0 containers: []
	W1210 07:09:47.432019  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:47.432029  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:47.432060  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:47.458214  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:47.458248  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:47.490816  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:47.490843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:47.549328  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:47.549361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:47.562826  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:47.562855  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:47.624764  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:47.617028    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.617678    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619303    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.619812    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:47.621440    7920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	[log-gathering pass at 07:09:50 omitted: identical to the 07:09:47 pass above, ending in the same "connection refused" describe-nodes failure (PID 8029)]
	I1210 07:09:53.056876  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:53.067392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:53.067464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:53.092029  303437 cri.go:89] found id: ""
	I1210 07:09:53.092052  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.092062  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:53.092068  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:53.092125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:53.118131  303437 cri.go:89] found id: ""
	I1210 07:09:53.118156  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.118165  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:53.118172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:53.118232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:53.147375  303437 cri.go:89] found id: ""
	I1210 07:09:53.147398  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.147407  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:53.147413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:53.147471  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:53.184782  303437 cri.go:89] found id: ""
	I1210 07:09:53.184801  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.184810  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:53.184816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:53.184875  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:53.211867  303437 cri.go:89] found id: ""
	I1210 07:09:53.211892  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.211901  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:53.211908  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:53.211965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:53.237656  303437 cri.go:89] found id: ""
	I1210 07:09:53.237678  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.237686  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:53.237693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:53.237761  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:53.262840  303437 cri.go:89] found id: ""
	I1210 07:09:53.262861  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.262870  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:53.262876  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:53.262934  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:53.287214  303437 cri.go:89] found id: ""
	I1210 07:09:53.287235  303437 logs.go:282] 0 containers: []
	W1210 07:09:53.287243  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:53.287252  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:53.287265  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:53.316241  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:53.316267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:53.371646  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:53.371682  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:53.384755  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:53.384788  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:53.447921  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:53.440066    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.440752    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442394    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.442882    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:53.444521    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:09:53.447948  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:53.447961  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
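Each stanza above has the same shape: minikube probes for a running kube-apiserver, finds no control-plane containers, and falls back to gathering diagnostics. The repeated kubectl "connection refused" errors follow directly from that: nothing is listening on the apiserver endpoint named in the kubeconfig. A minimal reachability probe, assuming the localhost:8443 address from the log (an illustrative sketch, not minikube's code):

    // Illustrative sketch (not minikube's code): check whether anything is
    // listening on the apiserver endpoint that kubectl dials above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// localhost:8443 is the apiserver address from the kubeconfig in the log.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// With no kube-apiserver container running, this reproduces the
    		// "connect: connection refused" seen in the kubectl stderr.
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }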
	I1210 07:09:55.973173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:55.983576  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:55.983656  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:56.011801  303437 cri.go:89] found id: ""
	I1210 07:09:56.011830  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.011840  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:56.011851  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:56.011968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:56.038072  303437 cri.go:89] found id: ""
	I1210 07:09:56.038104  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.038114  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:56.038120  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:56.038198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:56.068512  303437 cri.go:89] found id: ""
	I1210 07:09:56.068586  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.068610  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:56.068629  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:56.068716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:56.094431  303437 cri.go:89] found id: ""
	I1210 07:09:56.094462  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.094471  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:56.094478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:56.094550  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:56.120840  303437 cri.go:89] found id: ""
	I1210 07:09:56.120865  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.120875  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:56.120881  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:56.120957  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:56.145302  303437 cri.go:89] found id: ""
	I1210 07:09:56.145335  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.145344  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:56.145350  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:56.145415  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:56.177802  303437 cri.go:89] found id: ""
	I1210 07:09:56.177828  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.177837  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:56.177843  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:56.177903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:56.217508  303437 cri.go:89] found id: ""
	I1210 07:09:56.217535  303437 logs.go:282] 0 containers: []
	W1210 07:09:56.217544  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:56.217553  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:56.217565  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:56.236388  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:56.236414  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:56.299818  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:56.290345    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.291927    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.293053    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.294824    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.295281    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:56.290345    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.291927    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.293053    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.294824    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:56.295281    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:56.299836  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:56.299849  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:56.324241  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:56.324274  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:56.351770  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:56.351798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
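The per-component checks (the cri.go:54 / cri.go:89 lines) shell out to `sudo crictl ps -a --quiet --name=<component>`; with --quiet, crictl prints only container IDs, one per line, so empty output is exactly what produces `found id: ""` followed by the "0 containers" warning. A sketch of the same check run locally (assuming crictl on PATH and sudo access; containerIDs is a made-up helper, not a minikube function):

    // Sketch: list container IDs whose name matches a component, the way
    // the crictl invocations above do over SSH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
    		"--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	// --quiet prints one container ID per line; Fields drops blanks.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	// An empty slice corresponds to `found id: ""` / "0 containers" above.
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }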
	I1210 07:09:58.907151  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:58.920281  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:09:58.920355  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:09:58.951789  303437 cri.go:89] found id: ""
	I1210 07:09:58.951887  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.951924  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:09:58.951955  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:09:58.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:09:58.988101  303437 cri.go:89] found id: ""
	I1210 07:09:58.988174  303437 logs.go:282] 0 containers: []
	W1210 07:09:58.988200  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:09:58.988214  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:09:58.988289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:09:59.015007  303437 cri.go:89] found id: ""
	I1210 07:09:59.015061  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.015070  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:09:59.015076  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:09:59.015145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:09:59.041267  303437 cri.go:89] found id: ""
	I1210 07:09:59.041290  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.041299  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:09:59.041305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:09:59.041364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:09:59.065295  303437 cri.go:89] found id: ""
	I1210 07:09:59.065317  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.065325  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:59.065332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:09:59.065389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:09:59.090688  303437 cri.go:89] found id: ""
	I1210 07:09:59.090710  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.090719  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:09:59.090735  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:09:59.090796  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:09:59.123411  303437 cri.go:89] found id: ""
	I1210 07:09:59.123433  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.123442  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:59.123448  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:09:59.123507  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:09:59.148970  303437 cri.go:89] found id: ""
	I1210 07:09:59.148995  303437 logs.go:282] 0 containers: []
	W1210 07:09:59.149003  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:09:59.149013  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:59.149024  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:59.213078  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:59.213112  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:59.229582  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:59.229610  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:59.291341  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:09:59.283620    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.284364    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.285965    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.286418    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.288009    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:09:59.283620    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.284364    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.285965    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.286418    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:09:59.288009    8356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:59.291371  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:09:59.291383  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:09:59.316302  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:09:59.316335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
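The "Gathering logs for ..." lines fan out over a fixed set of sources, each a single shell command: the last 400 journal lines for the kubelet and containerd units, and a filtered dmesg (-H human-readable, -P no pager, -L=never disables color, --level keeps only warnings and worse), capped with tail. A sketch of that fan-out; the command strings are copied from the Run lines above, but the map itself is illustrative, not minikube's internal table:

    // Sketch of the log-source fan-out seen in the logs.go lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := map[string]string{
    		"kubelet":    "sudo journalctl -u kubelet -n 400",
    		"containerd": "sudo journalctl -u containerd -n 400",
    		"dmesg": "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg" +
    			" | tail -n 400",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("== %s: err=%v, %d bytes\n", name, err, len(out))
    	}
    }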
	I1210 07:10:01.843334  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:01.854638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:01.854715  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:01.880761  303437 cri.go:89] found id: ""
	I1210 07:10:01.880783  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.880792  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:01.880802  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:01.880863  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:01.910547  303437 cri.go:89] found id: ""
	I1210 07:10:01.910582  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.910591  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:01.910597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:01.910659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:01.946840  303437 cri.go:89] found id: ""
	I1210 07:10:01.946868  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.946878  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:01.946885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:01.946947  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:01.978924  303437 cri.go:89] found id: ""
	I1210 07:10:01.978961  303437 logs.go:282] 0 containers: []
	W1210 07:10:01.978970  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:01.978976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:01.979080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:02.019488  303437 cri.go:89] found id: ""
	I1210 07:10:02.019517  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.019536  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:02.019543  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:02.019630  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:02.046286  303437 cri.go:89] found id: ""
	I1210 07:10:02.046307  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.046319  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:02.046325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:02.046390  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:02.072527  303437 cri.go:89] found id: ""
	I1210 07:10:02.072552  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.072562  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:02.072568  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:02.072631  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:02.097399  303437 cri.go:89] found id: ""
	I1210 07:10:02.097421  303437 logs.go:282] 0 containers: []
	W1210 07:10:02.097430  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:02.097440  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:02.097451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:02.158615  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:02.158651  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:02.174600  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:02.174685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:02.250555  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:02.241608    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.242681    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.244544    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.245035    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.246871    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:02.241608    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.242681    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.244544    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.245035    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:02.246871    8472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:02.250577  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:02.250590  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:02.276945  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:02.276982  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
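The "container status" source uses a two-stage fallback: `which crictl || echo crictl` resolves crictl's full path (or leaves the bare name if `which` finds nothing), and if that crictl invocation fails, `docker ps -a` is tried instead. The same logic written out explicitly (a sketch; containerStatus is a made-up name):

    // Sketch of the container-status fallback from the Run line above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() ([]byte, error) {
    	// `which crictl || echo crictl`: use the resolved path if any,
    	// otherwise fall back to the bare command name.
    	path, err := exec.LookPath("crictl")
    	if err != nil {
    		path = "crictl"
    	}
    	out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput()
    	if err == nil {
    		return out, nil
    	}
    	// `|| sudo docker ps -a`: if crictl fails, try docker.
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Printf("err=%v\n%s", err, out)
    }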
	I1210 07:10:04.815961  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:04.826415  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:04.826482  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:04.851192  303437 cri.go:89] found id: ""
	I1210 07:10:04.851217  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.851226  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:04.851233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:04.851295  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:04.880601  303437 cri.go:89] found id: ""
	I1210 07:10:04.880623  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.880632  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:04.880639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:04.880700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:04.910922  303437 cri.go:89] found id: ""
	I1210 07:10:04.910944  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.910954  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:04.910960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:04.911053  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:04.945097  303437 cri.go:89] found id: ""
	I1210 07:10:04.945122  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.945131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:04.945137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:04.945198  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:04.976739  303437 cri.go:89] found id: ""
	I1210 07:10:04.976759  303437 logs.go:282] 0 containers: []
	W1210 07:10:04.976768  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:04.976774  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:04.976828  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:05.004094  303437 cri.go:89] found id: ""
	I1210 07:10:05.004126  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.004136  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:05.004143  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:05.004221  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:05.031557  303437 cri.go:89] found id: ""
	I1210 07:10:05.031582  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.031591  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:05.031598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:05.031660  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:05.057223  303437 cri.go:89] found id: ""
	I1210 07:10:05.057245  303437 logs.go:282] 0 containers: []
	W1210 07:10:05.057254  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:05.057264  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:05.057277  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:05.070835  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:05.070868  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:05.134682  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:05.126987    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.127646    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129140    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129598    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.131280    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:05.126987    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.127646    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129140    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.129598    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:05.131280    8584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:05.134701  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:05.134713  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:05.161896  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:05.161984  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:05.199637  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:05.199661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
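Each cycle is gated by `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches the pattern against the full command line, -x requires the regex to match that line exactly, and -n returns only the newest matching PID. pgrep exits non-zero when nothing matches, so the loop sleeps and repeats; the timestamps above show a cadence of roughly three seconds. A sketch of such a poll (the interval and timeout are illustrative, not minikube's settings):

    // Sketch of the retry loop implied by the repeating stanzas.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		// -f: full command line, -x: exact regex match, -n: newest PID.
    		err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // roughly the cadence seen above
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }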
	I1210 07:10:07.763534  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:07.773915  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:07.773983  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:07.800754  303437 cri.go:89] found id: ""
	I1210 07:10:07.800778  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.800788  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:07.800794  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:07.800856  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:07.826430  303437 cri.go:89] found id: ""
	I1210 07:10:07.826453  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.826462  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:07.826468  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:07.826527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:07.850496  303437 cri.go:89] found id: ""
	I1210 07:10:07.850517  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.850528  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:07.850534  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:07.850592  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:07.875524  303437 cri.go:89] found id: ""
	I1210 07:10:07.875546  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.875555  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:07.875561  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:07.875622  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:07.905072  303437 cri.go:89] found id: ""
	I1210 07:10:07.905094  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.905103  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:07.905109  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:07.905189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:07.936426  303437 cri.go:89] found id: ""
	I1210 07:10:07.936449  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.936457  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:07.936464  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:07.936527  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:07.973539  303437 cri.go:89] found id: ""
	I1210 07:10:07.973618  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.973640  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:07.973659  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:07.973772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:07.999823  303437 cri.go:89] found id: ""
	I1210 07:10:07.999914  303437 logs.go:282] 0 containers: []
	W1210 07:10:07.999941  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:07.999964  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:08.000003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:08.068982  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:08.060765    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062197    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062643    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064190    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064500    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:08.060765    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062197    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.062643    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064190    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:08.064500    8694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:08.069056  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:08.069079  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:08.094318  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:08.094351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:08.122292  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:08.122320  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:08.184455  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:08.184505  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:10.701562  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:10.711949  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:10.712015  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:10.737041  303437 cri.go:89] found id: ""
	I1210 07:10:10.737068  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.737078  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:10.737085  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:10.737152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:10.766737  303437 cri.go:89] found id: ""
	I1210 07:10:10.766759  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.766769  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:10.766775  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:10.766833  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:10.795664  303437 cri.go:89] found id: ""
	I1210 07:10:10.795689  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.795698  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:10.795705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:10.795763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:10.819880  303437 cri.go:89] found id: ""
	I1210 07:10:10.819908  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.819917  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:10.819924  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:10.819986  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:10.843991  303437 cri.go:89] found id: ""
	I1210 07:10:10.844028  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.844037  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:10.844043  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:10.844121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:10.868988  303437 cri.go:89] found id: ""
	I1210 07:10:10.869010  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.869019  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:10.869025  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:10.869088  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:10.893331  303437 cri.go:89] found id: ""
	I1210 07:10:10.893361  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.893371  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:10.893392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:10.893473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:10.925989  303437 cri.go:89] found id: ""
	I1210 07:10:10.926016  303437 logs.go:282] 0 containers: []
	W1210 07:10:10.926025  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:10.926034  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:10.926045  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:10.951381  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:10.951417  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:10.992523  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:10.992547  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:11.048715  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:11.048751  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:11.062864  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:11.062892  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:11.126862  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:11.117747    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.118147    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.119641    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.120260    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.122142    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:11.117747    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.118147    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.119641    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.120260    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:11.122142    8825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
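"describe nodes" is the one gathered source that needs a live apiserver: it runs the version-matched kubectl under sudo with the node's kubeconfig, kubectl retries API discovery a handful of times (the five repeated memcache.go errors per attempt), and the command exits with status 1, which logs.go:130 records as the failed-command dump above. Capturing the two streams the same way (a sketch; the binary and kubeconfig paths are copied from the log):

    // Sketch of the "describe nodes" gather with separate stdout/stderr,
    // matching the stdout:/stderr: sections in the dump above.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	err := cmd.Run() // exits 1 while the apiserver is down
    	fmt.Printf("err=%v\nstdout:\n%s\nstderr:\n%s",
    		err, stdout.String(), stderr.String())
    }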
	I1210 07:10:13.627173  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:13.640121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:13.640189  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:13.666074  303437 cri.go:89] found id: ""
	I1210 07:10:13.666097  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.666106  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:13.666112  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:13.666172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:13.694979  303437 cri.go:89] found id: ""
	I1210 07:10:13.695001  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.695043  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:13.695051  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:13.695110  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:13.719004  303437 cri.go:89] found id: ""
	I1210 07:10:13.719045  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.719054  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:13.719066  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:13.719128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:13.743528  303437 cri.go:89] found id: ""
	I1210 07:10:13.743592  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.743614  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:13.743627  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:13.743700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:13.773695  303437 cri.go:89] found id: ""
	I1210 07:10:13.773720  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.773737  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:13.773743  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:13.773802  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:13.797583  303437 cri.go:89] found id: ""
	I1210 07:10:13.797605  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.797614  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:13.797620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:13.797678  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:13.825318  303437 cri.go:89] found id: ""
	I1210 07:10:13.825348  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.825357  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:13.825363  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:13.825420  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:13.853561  303437 cri.go:89] found id: ""
	I1210 07:10:13.853585  303437 logs.go:282] 0 containers: []
	W1210 07:10:13.853594  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:13.853604  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:13.853622  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:13.935926  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:13.915703    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.920870    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.921425    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923146    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923621    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:13.915703    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.920870    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.921425    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923146    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:13.923621    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:13.935954  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:13.935967  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:13.962598  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:13.962630  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:13.990458  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:13.990484  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:14.047843  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:14.047880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.562478  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:16.576152  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:16.576222  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:16.604031  303437 cri.go:89] found id: ""
	I1210 07:10:16.604054  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.604063  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:16.604069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:16.604128  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:16.628609  303437 cri.go:89] found id: ""
	I1210 07:10:16.628631  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.628640  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:16.628658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:16.628717  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:16.653619  303437 cri.go:89] found id: ""
	I1210 07:10:16.653656  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.653665  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:16.653671  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:16.653756  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:16.682568  303437 cri.go:89] found id: ""
	I1210 07:10:16.682604  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.682613  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:16.682620  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:16.682693  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:16.707801  303437 cri.go:89] found id: ""
	I1210 07:10:16.707835  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.707845  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:16.707852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:16.707935  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:16.732620  303437 cri.go:89] found id: ""
	I1210 07:10:16.732688  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.732711  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:16.732728  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:16.732825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:16.758445  303437 cri.go:89] found id: ""
	I1210 07:10:16.758467  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.758475  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:16.758482  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:16.758539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:16.783975  303437 cri.go:89] found id: ""
	I1210 07:10:16.784001  303437 logs.go:282] 0 containers: []
	W1210 07:10:16.784010  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:16.784019  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:16.784047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:16.814022  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:16.814049  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:16.869237  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:16.869269  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:16.882654  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:16.882731  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:16.969042  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:16.957319    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.958047    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.960851    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.961625    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.964373    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:16.957319    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.958047    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.960851    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.961625    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:16.964373    9042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:16.969064  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:16.969086  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
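Editor's note on the cycle above: each probe runs `crictl ps -a --quiet --name=<component>` once per control-plane component and treats empty output (`found id: ""`) as "no container". A minimal standalone sketch of that probe, assuming crictl on PATH and passwordless sudo; the helper below is illustrative, not minikube's actual cri.go code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
    // it returns the IDs of all containers (any state) whose name matches.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	// The same component list the log walks through on every cycle.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }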
	I1210 07:10:19.496234  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:19.506951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:19.507093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:19.530611  303437 cri.go:89] found id: ""
	I1210 07:10:19.530643  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.530652  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:19.530658  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:19.530727  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:19.557799  303437 cri.go:89] found id: ""
	I1210 07:10:19.557835  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.557845  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:19.557852  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:19.557920  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:19.582933  303437 cri.go:89] found id: ""
	I1210 07:10:19.582967  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.582976  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:19.582983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:19.583072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:19.607826  303437 cri.go:89] found id: ""
	I1210 07:10:19.607889  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.607909  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:19.607917  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:19.607979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:19.632512  303437 cri.go:89] found id: ""
	I1210 07:10:19.632580  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.632597  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:19.632604  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:19.632665  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:19.657636  303437 cri.go:89] found id: ""
	I1210 07:10:19.657668  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.657677  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:19.657684  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:19.657765  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:19.682353  303437 cri.go:89] found id: ""
	I1210 07:10:19.682423  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.682456  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:19.682476  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:19.682562  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:19.706488  303437 cri.go:89] found id: ""
	I1210 07:10:19.706549  303437 logs.go:282] 0 containers: []
	W1210 07:10:19.706582  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:19.706606  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:19.706644  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:19.719694  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:19.719721  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:19.784893  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:19.777331    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.777928    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.779521    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.780075    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.781604    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:19.777331    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.777928    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.779521    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.780075    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:19.781604    9145 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:19.784915  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:19.784928  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:19.809606  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:19.809641  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:19.841622  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:19.841657  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
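Editor's note: every `kubectl describe nodes` attempt in these cycles fails with `dial tcp [::1]:8443: connect: connection refused`, i.e. nothing is listening on the apiserver port at all, which is consistent with the empty kube-apiserver probe results. A quick way to confirm that independently of kubectl (a sketch; the endpoint localhost:8443 is taken straight from the errors above):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint kubectl is failing against: localhost:8443.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connect: connection refused" here means no process is
    		// bound to the port at all, not a TLS or auth problem.
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }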
	I1210 07:10:22.397071  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:22.407225  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:22.407298  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:22.443280  303437 cri.go:89] found id: ""
	I1210 07:10:22.443304  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.443313  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:22.443320  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:22.443377  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:22.476100  303437 cri.go:89] found id: ""
	I1210 07:10:22.476121  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.476130  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:22.476136  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:22.476197  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:22.504294  303437 cri.go:89] found id: ""
	I1210 07:10:22.504317  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.504326  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:22.504332  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:22.504388  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:22.527983  303437 cri.go:89] found id: ""
	I1210 07:10:22.528006  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.528015  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:22.528028  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:22.528085  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:22.552219  303437 cri.go:89] found id: ""
	I1210 07:10:22.552243  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.552252  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:22.552257  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:22.552314  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:22.576437  303437 cri.go:89] found id: ""
	I1210 07:10:22.576459  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.576469  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:22.576475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:22.576530  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:22.601577  303437 cri.go:89] found id: ""
	I1210 07:10:22.601599  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.601608  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:22.601614  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:22.601671  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:22.625855  303437 cri.go:89] found id: ""
	I1210 07:10:22.625878  303437 logs.go:282] 0 containers: []
	W1210 07:10:22.625889  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:22.625899  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:22.625910  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:22.681686  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:22.681732  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:22.695126  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:22.695154  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:22.758688  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:22.751337    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.752263    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.753775    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.754238    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.755632    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:22.751337    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.752263    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.753775    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.754238    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:22.755632    9260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:22.758709  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:22.758722  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:22.783636  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:22.783671  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:25.311139  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:25.321885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:25.321968  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:25.346177  303437 cri.go:89] found id: ""
	I1210 07:10:25.346257  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.346280  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:25.346299  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:25.346402  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:25.371678  303437 cri.go:89] found id: ""
	I1210 07:10:25.371751  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.371766  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:25.371773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:25.371836  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:25.404393  303437 cri.go:89] found id: ""
	I1210 07:10:25.404419  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.404436  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:25.404450  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:25.404528  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:25.439726  303437 cri.go:89] found id: ""
	I1210 07:10:25.439766  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.439779  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:25.439803  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:25.439965  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:25.476965  303437 cri.go:89] found id: ""
	I1210 07:10:25.476998  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.477007  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:25.477018  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:25.477127  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:25.502342  303437 cri.go:89] found id: ""
	I1210 07:10:25.502369  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.502378  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:25.502385  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:25.502451  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:25.528396  303437 cri.go:89] found id: ""
	I1210 07:10:25.528423  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.528432  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:25.528439  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:25.528543  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:25.555005  303437 cri.go:89] found id: ""
	I1210 07:10:25.555065  303437 logs.go:282] 0 containers: []
	W1210 07:10:25.555074  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:25.555083  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:25.555095  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:25.568421  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:25.568450  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:25.629120  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:25.622083    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.622649    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624105    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624522    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.626001    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:25.622083    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.622649    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624105    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.624522    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:25.626001    9365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:25.629143  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:25.629155  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:25.654736  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:25.654768  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:25.685404  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:25.685473  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
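Editor's note: the cycles recur on a roughly three-second cadence (07:10:16, :19, :22, :25, :28, ...), each one starting with `sudo pgrep -xnf kube-apiserver.*minikube.*` and re-gathering diagnostics when the process is absent. A minimal sketch of such a wait loop, using a plain time.Ticker; minikube's real retry logic differs:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
    // pgrep exits 0 only when a matching process exists.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if apiserverRunning() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForAPIServer(ctx, 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }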
	I1210 07:10:28.247164  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:28.257638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:28.257709  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:28.283706  303437 cri.go:89] found id: ""
	I1210 07:10:28.283729  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.283738  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:28.283744  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:28.283806  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:28.311304  303437 cri.go:89] found id: ""
	I1210 07:10:28.311327  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.311336  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:28.311342  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:28.311407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:28.336026  303437 cri.go:89] found id: ""
	I1210 07:10:28.336048  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.336056  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:28.336062  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:28.336121  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:28.361333  303437 cri.go:89] found id: ""
	I1210 07:10:28.361354  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.361362  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:28.361369  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:28.361428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:28.389101  303437 cri.go:89] found id: ""
	I1210 07:10:28.389123  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.389132  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:28.389138  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:28.389196  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:28.422619  303437 cri.go:89] found id: ""
	I1210 07:10:28.422641  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.422649  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:28.422656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:28.422713  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:28.453144  303437 cri.go:89] found id: ""
	I1210 07:10:28.453217  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.453240  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:28.453260  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:28.453347  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:28.483124  303437 cri.go:89] found id: ""
	I1210 07:10:28.483148  303437 logs.go:282] 0 containers: []
	W1210 07:10:28.483158  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:28.483167  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:28.483178  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:28.496766  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:28.496793  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:28.563971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:28.556200    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.557736    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.558089    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559546    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559812    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:28.556200    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.557736    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.558089    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559546    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:28.559812    9478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:28.564003  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:28.564015  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:28.588981  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:28.589012  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:28.617971  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:28.618000  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.175214  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:31.187495  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:31.187568  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:31.221446  303437 cri.go:89] found id: ""
	I1210 07:10:31.221473  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.221482  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:31.221488  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:31.221548  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:31.246343  303437 cri.go:89] found id: ""
	I1210 07:10:31.246377  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.246386  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:31.246392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:31.246459  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:31.270266  303437 cri.go:89] found id: ""
	I1210 07:10:31.270289  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.270303  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:31.270309  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:31.270365  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:31.295166  303437 cri.go:89] found id: ""
	I1210 07:10:31.295190  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.295199  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:31.295219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:31.295284  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:31.320783  303437 cri.go:89] found id: ""
	I1210 07:10:31.320822  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.320831  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:31.320838  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:31.320902  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:31.344885  303437 cri.go:89] found id: ""
	I1210 07:10:31.344910  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.344919  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:31.344927  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:31.344984  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:31.369604  303437 cri.go:89] found id: ""
	I1210 07:10:31.369627  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.369636  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:31.369642  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:31.369700  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:31.396633  303437 cri.go:89] found id: ""
	I1210 07:10:31.396654  303437 logs.go:282] 0 containers: []
	W1210 07:10:31.396663  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:31.396672  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:31.396685  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.458644  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:31.458678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:31.474603  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:31.474632  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:31.540901  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:31.533490    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.534340    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.535925    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.536236    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.537709    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:31.533490    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.534340    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.535925    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.536236    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:31.537709    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:31.540921  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:31.540933  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:31.565730  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:31.565763  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:34.098229  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:34.108967  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:34.109037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:34.137131  303437 cri.go:89] found id: ""
	I1210 07:10:34.137153  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.137162  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:34.137168  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:34.137224  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:34.171468  303437 cri.go:89] found id: ""
	I1210 07:10:34.171489  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.171498  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:34.171504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:34.171565  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:34.199509  303437 cri.go:89] found id: ""
	I1210 07:10:34.199531  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.199539  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:34.199545  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:34.199603  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:34.230270  303437 cri.go:89] found id: ""
	I1210 07:10:34.230292  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.230301  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:34.230308  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:34.230368  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:34.257508  303437 cri.go:89] found id: ""
	I1210 07:10:34.257529  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.257538  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:34.257544  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:34.257598  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:34.285487  303437 cri.go:89] found id: ""
	I1210 07:10:34.285509  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.285517  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:34.285524  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:34.285584  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:34.312438  303437 cri.go:89] found id: ""
	I1210 07:10:34.312460  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.312469  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:34.312475  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:34.312535  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:34.336063  303437 cri.go:89] found id: ""
	I1210 07:10:34.336137  303437 logs.go:282] 0 containers: []
	W1210 07:10:34.336152  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:34.336161  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:34.336172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:34.392136  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:34.392168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:34.405661  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:34.405691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:34.486073  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:34.478058    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.478642    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480446    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480885    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.482506    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:34.478058    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.478642    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480446    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.480885    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:34.482506    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:34.486096  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:34.486110  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:34.512711  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:34.512745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:37.043733  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:37.054272  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:37.054343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:37.080616  303437 cri.go:89] found id: ""
	I1210 07:10:37.080640  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.080649  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:37.080656  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:37.080716  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:37.104975  303437 cri.go:89] found id: ""
	I1210 07:10:37.105002  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.105010  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:37.105017  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:37.105077  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:37.128929  303437 cri.go:89] found id: ""
	I1210 07:10:37.128952  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.128960  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:37.128966  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:37.129026  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:37.154538  303437 cri.go:89] found id: ""
	I1210 07:10:37.154561  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.154570  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:37.154577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:37.154637  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:37.183900  303437 cri.go:89] found id: ""
	I1210 07:10:37.183920  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.183928  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:37.183934  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:37.183994  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:37.218659  303437 cri.go:89] found id: ""
	I1210 07:10:37.218681  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.218689  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:37.218696  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:37.218758  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:37.243786  303437 cri.go:89] found id: ""
	I1210 07:10:37.243808  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.243817  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:37.243824  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:37.243889  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:37.271822  303437 cri.go:89] found id: ""
	I1210 07:10:37.271847  303437 logs.go:282] 0 containers: []
	W1210 07:10:37.271856  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:37.271865  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:37.271877  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:37.327230  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:37.327261  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:37.340728  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:37.340755  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:37.402472  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:37.395392    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.395764    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397322    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397745    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.399404    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:10:37.395392    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.395764    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397322    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.397745    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:37.399404    9824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:37.402534  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:37.402560  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:37.428514  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:37.428587  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
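Editor's note: when no control-plane container turns up, each cycle falls back to the same four diagnostic sources — kubelet and containerd via journalctl, the kernel ring buffer via a filtered dmesg, and container status via crictl with a docker fallback. A sketch that collects those same four locally, assuming a systemd host and passwordless sudo (an illustration of the gathering step, not minikube's logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The same commands the log runs over ssh on every cycle.
    	sources := map[string]string{
    		"kubelet":          `sudo journalctl -u kubelet -n 400`,
    		"containerd":       `sudo journalctl -u containerd -n 400`,
    		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("== %s (failed: %v) ==\n%s\n", name, err, out)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s\n", name, out)
    	}
    }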
	I1210 07:10:39.957676  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:39.968353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:39.968422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:39.996461  303437 cri.go:89] found id: ""
	I1210 07:10:39.996487  303437 logs.go:282] 0 containers: []
	W1210 07:10:39.996497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:39.996504  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:39.996572  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:40.052529  303437 cri.go:89] found id: ""
	I1210 07:10:40.052553  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.052563  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:40.052570  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:40.052635  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:40.083247  303437 cri.go:89] found id: ""
	I1210 07:10:40.083272  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.083282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:40.083288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:40.083349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:40.109171  303437 cri.go:89] found id: ""
	I1210 07:10:40.109195  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.109204  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:40.109211  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:40.109271  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:40.138871  303437 cri.go:89] found id: ""
	I1210 07:10:40.138950  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.138972  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:40.138992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:40.139100  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:40.176299  303437 cri.go:89] found id: ""
	I1210 07:10:40.176335  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.176345  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:40.176352  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:40.176448  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:40.213557  303437 cri.go:89] found id: ""
	I1210 07:10:40.213590  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.213600  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:40.213622  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:40.213706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:40.253605  303437 cri.go:89] found id: ""
	I1210 07:10:40.253639  303437 logs.go:282] 0 containers: []
	W1210 07:10:40.253648  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:40.253658  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:40.253670  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:40.289048  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:40.289076  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:40.348311  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:40.348344  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:40.364207  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:40.364249  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:40.431287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:40.422606    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.423275    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.424961    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.425595    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:40.427272    9947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:40.431309  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:40.431325  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
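Each pass also probes for the control-plane containers one name at a time, and every probe returns an empty ID list, which is what produces the "No container was found" warnings above. The per-component calls condense to a loop; a sketch, assuming crictl is on PATH inside the node:

    # One pass of the control-plane scan above; an empty result per name means
    # containerd has no container (running or exited) for that component.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done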
	I1210 07:10:42.962817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:42.973583  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:42.973714  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:43.004181  303437 cri.go:89] found id: ""
	I1210 07:10:43.004211  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.004222  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:43.004235  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:43.004302  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:43.031231  303437 cri.go:89] found id: ""
	I1210 07:10:43.031252  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.031261  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:43.031267  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:43.031324  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:43.056959  303437 cri.go:89] found id: ""
	I1210 07:10:43.056991  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.057002  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:43.057009  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:43.057072  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:43.086361  303437 cri.go:89] found id: ""
	I1210 07:10:43.086393  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.086403  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:43.086413  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:43.086481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:43.112977  303437 cri.go:89] found id: ""
	I1210 07:10:43.113003  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.113013  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:43.113020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:43.113079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:43.137716  303437 cri.go:89] found id: ""
	I1210 07:10:43.137740  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.137749  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:43.137755  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:43.137814  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:43.173396  303437 cri.go:89] found id: ""
	I1210 07:10:43.173421  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.173431  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:43.173437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:43.173494  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:43.202828  303437 cri.go:89] found id: ""
	I1210 07:10:43.202852  303437 logs.go:282] 0 containers: []
	W1210 07:10:43.202861  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:43.202871  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:43.202885  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:43.265997  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:43.266036  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:43.281547  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:43.281582  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:43.359532  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:43.352125   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.352633   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354207   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.354531   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:43.356009   10046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:43.359554  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:43.359567  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:43.392377  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:43.392433  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:45.942739  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:45.955296  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:45.955374  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:45.984462  303437 cri.go:89] found id: ""
	I1210 07:10:45.984488  303437 logs.go:282] 0 containers: []
	W1210 07:10:45.984497  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:45.984507  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:45.984566  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:46.014873  303437 cri.go:89] found id: ""
	I1210 07:10:46.014898  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.014920  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:46.014928  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:46.015038  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:46.044539  303437 cri.go:89] found id: ""
	I1210 07:10:46.044565  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.044574  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:46.044581  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:46.044642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:46.070950  303437 cri.go:89] found id: ""
	I1210 07:10:46.070975  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.070985  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:46.070992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:46.071091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:46.101134  303437 cri.go:89] found id: ""
	I1210 07:10:46.101160  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.101170  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:46.101176  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:46.101255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:46.126003  303437 cri.go:89] found id: ""
	I1210 07:10:46.126028  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.126037  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:46.126044  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:46.126103  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:46.152209  303437 cri.go:89] found id: ""
	I1210 07:10:46.152231  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.152239  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:46.152245  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:46.152303  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:46.183764  303437 cri.go:89] found id: ""
	I1210 07:10:46.183786  303437 logs.go:282] 0 containers: []
	W1210 07:10:46.183794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:46.183803  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:46.183813  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:46.248135  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:46.248173  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:46.262749  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:46.262778  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:46.330280  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:46.322629   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.323199   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.324997   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.325371   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:46.326892   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:46.330302  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:46.330315  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:46.356151  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:46.356184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:48.884130  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:48.894898  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:48.894989  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:48.919239  303437 cri.go:89] found id: ""
	I1210 07:10:48.919266  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.919275  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:48.919282  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:48.919343  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:48.946463  303437 cri.go:89] found id: ""
	I1210 07:10:48.946487  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.946497  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:48.946509  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:48.946569  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:48.971661  303437 cri.go:89] found id: ""
	I1210 07:10:48.971735  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.971757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:48.971772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:48.971857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:48.996435  303437 cri.go:89] found id: ""
	I1210 07:10:48.996457  303437 logs.go:282] 0 containers: []
	W1210 07:10:48.996466  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:48.996472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:48.996539  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:49.023269  303437 cri.go:89] found id: ""
	I1210 07:10:49.023296  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.023305  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:49.023311  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:49.023371  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:49.052018  303437 cri.go:89] found id: ""
	I1210 07:10:49.052042  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.052051  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:49.052058  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:49.052125  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:49.076866  303437 cri.go:89] found id: ""
	I1210 07:10:49.076929  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.076943  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:49.076951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:49.077009  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:49.105029  303437 cri.go:89] found id: ""
	I1210 07:10:49.105051  303437 logs.go:282] 0 containers: []
	W1210 07:10:49.105061  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:49.105070  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:49.105081  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:49.161025  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:49.161103  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:49.176997  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:49.177065  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:49.246287  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:49.238608   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.239236   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.240909   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.241468   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:49.243158   10272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:49.246359  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:49.246386  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:49.271827  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:49.271865  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
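The repeated describe-nodes failure is a symptom rather than a cause: kubectl inside the node dials localhost:8443, and nothing is listening there because no kube-apiserver container ever starts (see the empty crictl scans). A hedged way to confirm that by hand, reusing the commands from this log (the pgrep pattern is quoted here for shell safety; the `ss` check is an added assumption that iproute2 is present in the node image):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # exits non-zero: no apiserver process
    sudo ss -tlnp | grep 8443 || echo "nothing on 8443"   # hence "connection refused"
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig          # fails exactly as logged above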
	I1210 07:10:51.801611  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:51.812172  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:51.812240  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:51.836841  303437 cri.go:89] found id: ""
	I1210 07:10:51.836864  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.836874  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:51.836880  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:51.836942  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:51.860730  303437 cri.go:89] found id: ""
	I1210 07:10:51.860754  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.860764  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:51.860770  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:51.860831  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:51.885358  303437 cri.go:89] found id: ""
	I1210 07:10:51.885379  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.885388  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:51.885394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:51.885452  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:51.909974  303437 cri.go:89] found id: ""
	I1210 07:10:51.910038  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.910062  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:51.910080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:51.910152  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:51.938488  303437 cri.go:89] found id: ""
	I1210 07:10:51.938553  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.938577  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:51.938596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:51.938669  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:51.964789  303437 cri.go:89] found id: ""
	I1210 07:10:51.964821  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.964831  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:51.964837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:51.964914  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:51.988457  303437 cri.go:89] found id: ""
	I1210 07:10:51.988478  303437 logs.go:282] 0 containers: []
	W1210 07:10:51.988487  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:51.988493  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:51.988553  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:52.032140  303437 cri.go:89] found id: ""
	I1210 07:10:52.032164  303437 logs.go:282] 0 containers: []
	W1210 07:10:52.032177  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:52.032187  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:52.032198  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:52.058273  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:52.058311  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:52.089897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:52.089924  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:52.145350  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:52.145387  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:52.162441  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:52.162475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:52.244944  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:52.233560   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.234752   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.239562   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.240120   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:52.241681   10401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:54.746617  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:54.757597  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:54.757677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:54.785180  303437 cri.go:89] found id: ""
	I1210 07:10:54.785205  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.785215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:54.785222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:54.785283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:54.813159  303437 cri.go:89] found id: ""
	I1210 07:10:54.813184  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.813193  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:54.813200  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:54.813258  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:54.840481  303437 cri.go:89] found id: ""
	I1210 07:10:54.840503  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.840512  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:54.840519  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:54.840578  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:54.869478  303437 cri.go:89] found id: ""
	I1210 07:10:54.869500  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.869509  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:54.869516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:54.869573  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:54.892998  303437 cri.go:89] found id: ""
	I1210 07:10:54.893020  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.893028  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:54.893034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:54.893093  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:54.921729  303437 cri.go:89] found id: ""
	I1210 07:10:54.921755  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.921765  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:54.921772  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:54.921838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:54.946951  303437 cri.go:89] found id: ""
	I1210 07:10:54.946976  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.946985  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:54.946992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:54.947069  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:54.972444  303437 cri.go:89] found id: ""
	I1210 07:10:54.972466  303437 logs.go:282] 0 containers: []
	W1210 07:10:54.972475  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:54.972484  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:54.972502  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:54.997696  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:54.997743  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:55.038495  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:55.038532  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:55.099784  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:55.099825  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:55.115531  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:55.115561  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:55.193319  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:55.183569   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.187128   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188202   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.188540   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:55.190127   10508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:57.693558  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:57.704587  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:10:57.704698  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:10:57.733113  303437 cri.go:89] found id: ""
	I1210 07:10:57.733137  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.733147  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:10:57.733154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:10:57.733217  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:10:57.759697  303437 cri.go:89] found id: ""
	I1210 07:10:57.759721  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.759730  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:10:57.759736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:10:57.759813  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:10:57.785244  303437 cri.go:89] found id: ""
	I1210 07:10:57.785273  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.785282  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:10:57.785288  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:10:57.785349  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:10:57.819299  303437 cri.go:89] found id: ""
	I1210 07:10:57.819324  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.819333  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:10:57.819339  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:10:57.819397  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:10:57.843698  303437 cri.go:89] found id: ""
	I1210 07:10:57.843720  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.843729  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:57.843736  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:10:57.843797  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:10:57.867903  303437 cri.go:89] found id: ""
	I1210 07:10:57.867928  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.867938  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:10:57.867944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:10:57.868003  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:10:57.892038  303437 cri.go:89] found id: ""
	I1210 07:10:57.892065  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.892074  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:57.892080  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:10:57.892144  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:10:57.917032  303437 cri.go:89] found id: ""
	I1210 07:10:57.917055  303437 logs.go:282] 0 containers: []
	W1210 07:10:57.917064  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:10:57.917073  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:57.917084  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:57.972772  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:57.972808  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:57.986446  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:57.986475  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:58.053540  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:10:58.045445   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.046514   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.047201   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.048794   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:10:58.049108   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:10:58.053559  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:10:58.053572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:10:58.078999  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:10:58.079080  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
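The timestamps show the whole gather-and-scan cycle repeating roughly every three seconds, i.e. a poll loop waiting for the apiserver until some start timeout expires. A hypothetical sketch of that shape (minikube's real wait loop is Go code; the 3s interval is read off the timestamps above and the deadline value is illustrative only):

    # Hypothetical wait loop matching the cadence above, not minikube's code.
    deadline=$((SECONDS + 360))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        break
      fi
      sleep 3
    done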
	I1210 07:11:00.609346  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:00.620922  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:00.620998  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:00.647744  303437 cri.go:89] found id: ""
	I1210 07:11:00.647766  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.647775  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:00.647781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:00.647838  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:00.685141  303437 cri.go:89] found id: ""
	I1210 07:11:00.685162  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.685171  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:00.685177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:00.685237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:00.713949  303437 cri.go:89] found id: ""
	I1210 07:11:00.713971  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.713980  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:00.713986  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:00.714045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:00.740428  303437 cri.go:89] found id: ""
	I1210 07:11:00.740453  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.740463  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:00.740471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:00.740531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:00.765430  303437 cri.go:89] found id: ""
	I1210 07:11:00.765455  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.765464  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:00.765471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:00.765529  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:00.790771  303437 cri.go:89] found id: ""
	I1210 07:11:00.790797  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.790806  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:00.790813  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:00.790871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:00.817430  303437 cri.go:89] found id: ""
	I1210 07:11:00.817456  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.817465  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:00.817471  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:00.817531  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:00.841761  303437 cri.go:89] found id: ""
	I1210 07:11:00.841785  303437 logs.go:282] 0 containers: []
	W1210 07:11:00.841794  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:00.841803  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:00.841817  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:00.855324  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:00.855351  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:00.926358  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:00.918056   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.918855   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.920644   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.921186   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:00.922891   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
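Every "describe nodes" attempt in this run fails the same way: kubectl cannot even fetch the API group list because nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver probes above. A quick reachability check in Go, using the address from the log (the program itself is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint kubectl is dialing in the errors above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // expect "connection refused" here
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }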
	I1210 07:11:00.926380  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:00.926394  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:00.951644  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:00.951678  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
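The "container status" probe above is deliberately runtime-agnostic: it resolves crictl if present and falls back to docker ps -a when the crictl invocation fails. A rough Go equivalent of that fallback, assuming nothing about which runtime is installed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it resolves on PATH, as the shell probe does.
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        // Fall back to docker, mirroring the "|| sudo docker ps -a" branch.
        out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        fmt.Print(string(out))
    }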
	I1210 07:11:00.979845  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:00.979875  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
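From here the log repeats the same diagnostic cycle on a roughly three-second cadence: pgrep for a kube-apiserver process, the per-component crictl sweep, then log gathering. A minimal sketch of such a wait loop; the five-minute deadline is an assumption for illustration, not minikube's configured timeout:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the pgrep probe in the log: pgrep exits
    // non-zero when no process matches the pattern.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // assumed deadline
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            time.Sleep(3 * time.Second) // cadence visible in the timestamps
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }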
	I1210 07:11:03.540927  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:03.551392  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:03.551462  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:03.576792  303437 cri.go:89] found id: ""
	I1210 07:11:03.576821  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.576830  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:03.576837  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:03.576896  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:03.601193  303437 cri.go:89] found id: ""
	I1210 07:11:03.601216  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.601225  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:03.601233  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:03.601290  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:03.626528  303437 cri.go:89] found id: ""
	I1210 07:11:03.626550  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.626559  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:03.626565  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:03.626624  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:03.656106  303437 cri.go:89] found id: ""
	I1210 07:11:03.656128  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.656137  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:03.656149  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:03.656206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:03.691936  303437 cri.go:89] found id: ""
	I1210 07:11:03.691960  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.691970  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:03.691976  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:03.692037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:03.721295  303437 cri.go:89] found id: ""
	I1210 07:11:03.721321  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.721331  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:03.721338  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:03.721409  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:03.750080  303437 cri.go:89] found id: ""
	I1210 07:11:03.750105  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.750114  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:03.750121  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:03.750205  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:03.777748  303437 cri.go:89] found id: ""
	I1210 07:11:03.777771  303437 logs.go:282] 0 containers: []
	W1210 07:11:03.777780  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:03.777815  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:03.777836  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:03.792128  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:03.792159  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:03.859337  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:03.851981   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.852547   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.854609   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:03.856112   10836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:03.859358  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:03.859371  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:03.885445  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:03.885482  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:03.915897  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:03.915925  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:06.473632  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:06.484351  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:06.484431  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:06.509957  303437 cri.go:89] found id: ""
	I1210 07:11:06.509982  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.509991  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:06.509997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:06.510061  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:06.537150  303437 cri.go:89] found id: ""
	I1210 07:11:06.537175  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.537185  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:06.537195  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:06.537255  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:06.571765  303437 cri.go:89] found id: ""
	I1210 07:11:06.571789  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.571798  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:06.571804  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:06.571872  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:06.600905  303437 cri.go:89] found id: ""
	I1210 07:11:06.600928  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.600938  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:06.600944  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:06.601007  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:06.625296  303437 cri.go:89] found id: ""
	I1210 07:11:06.625320  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.625329  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:06.625335  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:06.625396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:06.653467  303437 cri.go:89] found id: ""
	I1210 07:11:06.653490  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.653499  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:06.653505  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:06.653563  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:06.693284  303437 cri.go:89] found id: ""
	I1210 07:11:06.693309  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.693319  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:06.693325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:06.693385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:06.731038  303437 cri.go:89] found id: ""
	I1210 07:11:06.731061  303437 logs.go:282] 0 containers: []
	W1210 07:11:06.731069  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:06.731079  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:06.731091  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:06.744632  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:06.744661  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:06.805649  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:06.797284   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.798135   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.799790   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.800106   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:06.801600   10950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:06.805675  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:06.805697  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:06.830881  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:06.830917  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:06.859403  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:06.859429  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:09.415956  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:09.428117  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:09.428237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:09.457364  303437 cri.go:89] found id: ""
	I1210 07:11:09.457426  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.457457  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:09.457478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:09.457570  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:09.487281  303437 cri.go:89] found id: ""
	I1210 07:11:09.487343  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.487375  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:09.487395  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:09.487481  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:09.512841  303437 cri.go:89] found id: ""
	I1210 07:11:09.512912  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.512945  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:09.512964  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:09.513056  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:09.538740  303437 cri.go:89] found id: ""
	I1210 07:11:09.538824  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.538855  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:09.538885  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:09.538979  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:09.566651  303437 cri.go:89] found id: ""
	I1210 07:11:09.566692  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.566718  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:09.566732  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:09.566811  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:09.591707  303437 cri.go:89] found id: ""
	I1210 07:11:09.591782  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.591798  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:09.591808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:09.591866  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:09.620542  303437 cri.go:89] found id: ""
	I1210 07:11:09.620568  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.620577  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:09.620584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:09.620642  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:09.649059  303437 cri.go:89] found id: ""
	I1210 07:11:09.649082  303437 logs.go:282] 0 containers: []
	W1210 07:11:09.649091  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:09.649100  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:09.649111  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:09.674480  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:09.674512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:09.715383  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:09.715410  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:09.775480  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:09.775512  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:09.788719  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:09.788798  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:09.855981  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:09.848870   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.849294   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.850550   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.851138   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:09.852720   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:12.356259  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:12.366697  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:12.366763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:12.390732  303437 cri.go:89] found id: ""
	I1210 07:11:12.390756  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.390764  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:12.390771  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:12.390826  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:12.430569  303437 cri.go:89] found id: ""
	I1210 07:11:12.430619  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.430631  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:12.430638  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:12.430704  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:12.477376  303437 cri.go:89] found id: ""
	I1210 07:11:12.477398  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.477406  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:12.477412  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:12.477483  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:12.503110  303437 cri.go:89] found id: ""
	I1210 07:11:12.503132  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.503140  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:12.503147  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:12.503206  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:12.527661  303437 cri.go:89] found id: ""
	I1210 07:11:12.527683  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.527691  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:12.527698  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:12.527757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:12.552603  303437 cri.go:89] found id: ""
	I1210 07:11:12.552624  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.552632  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:12.552639  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:12.552701  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:12.576969  303437 cri.go:89] found id: ""
	I1210 07:11:12.576991  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.576999  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:12.577005  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:12.577074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:12.602537  303437 cri.go:89] found id: ""
	I1210 07:11:12.602559  303437 logs.go:282] 0 containers: []
	W1210 07:11:12.602568  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:12.602577  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:12.602589  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:12.660382  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:12.660462  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:12.675575  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:12.675600  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:12.748937  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:12.741330   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.741988   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.743656   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.744158   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:12.745748   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:12.748957  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:12.748970  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:12.773717  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:12.773752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:15.305384  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:15.315713  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:15.315783  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:15.340655  303437 cri.go:89] found id: ""
	I1210 07:11:15.340678  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.340687  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:15.340693  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:15.340757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:15.366091  303437 cri.go:89] found id: ""
	I1210 07:11:15.366115  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.366123  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:15.366130  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:15.366187  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:15.392837  303437 cri.go:89] found id: ""
	I1210 07:11:15.392862  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.392871  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:15.392877  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:15.392939  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:15.435313  303437 cri.go:89] found id: ""
	I1210 07:11:15.435340  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.435349  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:15.435356  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:15.435422  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:15.466475  303437 cri.go:89] found id: ""
	I1210 07:11:15.466500  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.466509  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:15.466516  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:15.466575  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:15.497149  303437 cri.go:89] found id: ""
	I1210 07:11:15.497175  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.497184  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:15.497191  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:15.497250  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:15.523660  303437 cri.go:89] found id: ""
	I1210 07:11:15.523725  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.523741  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:15.523748  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:15.523808  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:15.547943  303437 cri.go:89] found id: ""
	I1210 07:11:15.547971  303437 logs.go:282] 0 containers: []
	W1210 07:11:15.547987  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:15.547996  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:15.548007  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:15.603029  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:15.603064  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:15.616115  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:15.616150  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:15.696616  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:15.686858   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.687579   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689227   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.689725   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:15.693083   11292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:15.696637  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:15.696660  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:15.728162  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:15.728212  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:18.262884  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:18.273396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:18.273467  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:18.298776  303437 cri.go:89] found id: ""
	I1210 07:11:18.298799  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.298809  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:18.298816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:18.298873  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:18.326358  303437 cri.go:89] found id: ""
	I1210 07:11:18.326431  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.326444  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:18.326472  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:18.326567  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:18.351094  303437 cri.go:89] found id: ""
	I1210 07:11:18.351116  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.351125  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:18.351132  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:18.351190  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:18.376189  303437 cri.go:89] found id: ""
	I1210 07:11:18.376211  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.376220  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:18.376227  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:18.376283  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:18.400127  303437 cri.go:89] found id: ""
	I1210 07:11:18.400151  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.400160  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:18.400166  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:18.400231  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:18.429089  303437 cri.go:89] found id: ""
	I1210 07:11:18.429160  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.429173  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:18.429181  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:18.429304  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:18.462081  303437 cri.go:89] found id: ""
	I1210 07:11:18.462162  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.462174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:18.462202  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:18.462289  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:18.490007  303437 cri.go:89] found id: ""
	I1210 07:11:18.490081  303437 logs.go:282] 0 containers: []
	W1210 07:11:18.490105  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:18.490128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:18.490164  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:18.506325  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:18.506400  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:18.582081  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:18.572894   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.573949   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.574774   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.576605   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:18.577188   11406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:18.582154  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:18.582194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:18.608014  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:18.608047  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:18.637797  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:18.637826  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:21.198374  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:21.208690  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:21.208757  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:21.235678  303437 cri.go:89] found id: ""
	I1210 07:11:21.235701  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.235710  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:21.235723  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:21.235788  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:21.259648  303437 cri.go:89] found id: ""
	I1210 07:11:21.259671  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.259679  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:21.259685  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:21.259742  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:21.284541  303437 cri.go:89] found id: ""
	I1210 07:11:21.284562  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.284571  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:21.284577  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:21.284634  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:21.309347  303437 cri.go:89] found id: ""
	I1210 07:11:21.309371  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.309380  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:21.309386  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:21.309449  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:21.337308  303437 cri.go:89] found id: ""
	I1210 07:11:21.337377  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.337397  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:21.337414  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:21.337498  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:21.362600  303437 cri.go:89] found id: ""
	I1210 07:11:21.362622  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.362631  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:21.362637  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:21.362706  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:21.386909  303437 cri.go:89] found id: ""
	I1210 07:11:21.386934  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.386951  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:21.386959  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:21.387045  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:21.444294  303437 cri.go:89] found id: ""
	I1210 07:11:21.444331  303437 logs.go:282] 0 containers: []
	W1210 07:11:21.444340  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:21.444350  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:21.444361  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:21.537630  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:21.526461   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.527437   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.531792   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.532470   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:21.534191   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:11:21.537650  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:21.537744  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:21.567303  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:21.567339  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:21.599305  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:21.599333  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:21.660956  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:21.660989  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:24.197663  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:24.209532  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:24.209604  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:24.235185  303437 cri.go:89] found id: ""
	I1210 07:11:24.235207  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.235215  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:24.235222  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:24.235291  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:24.269486  303437 cri.go:89] found id: ""
	I1210 07:11:24.269507  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.269515  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:24.269522  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:24.269580  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:24.295987  303437 cri.go:89] found id: ""
	I1210 07:11:24.296010  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.296018  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:24.296024  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:24.296080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:24.321843  303437 cri.go:89] found id: ""
	I1210 07:11:24.321918  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.321932  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:24.321939  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:24.322070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:24.349226  303437 cri.go:89] found id: ""
	I1210 07:11:24.349296  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.349309  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:24.349316  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:24.349439  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:24.382513  303437 cri.go:89] found id: ""
	I1210 07:11:24.382595  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.382617  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:24.382636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:24.382759  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:24.423211  303437 cri.go:89] found id: ""
	I1210 07:11:24.423284  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.423306  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:24.423325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:24.423413  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:24.483751  303437 cri.go:89] found id: ""
	I1210 07:11:24.483774  303437 logs.go:282] 0 containers: []
	W1210 07:11:24.483783  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
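
Each sweep above asks crictl only for the IDs of containers whose name matches one expected component: --quiet restricts output to container IDs and -a includes exited containers, so an empty result is the found id: "" reported for every component. For example:

    sudo crictl ps -a --quiet --name=kube-apiserver    # no output here => the component was never created
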
	I1210 07:11:24.483792  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:24.483831  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:24.554712  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:24.547245   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.548134   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.549755   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.550089   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.551593   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:24.547245   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.548134   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.549755   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.550089   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:24.551593   11627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:24.554746  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:24.554759  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:24.583135  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:24.583172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:24.621794  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:24.621824  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:24.686891  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:24.686927  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:27.212817  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:27.223470  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:27.223540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:27.250394  303437 cri.go:89] found id: ""
	I1210 07:11:27.250421  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.250431  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:27.250437  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:27.250497  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:27.275076  303437 cri.go:89] found id: ""
	I1210 07:11:27.275099  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.275108  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:27.275114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:27.275175  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:27.300285  303437 cri.go:89] found id: ""
	I1210 07:11:27.300311  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.300321  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:27.300327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:27.300389  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:27.324870  303437 cri.go:89] found id: ""
	I1210 07:11:27.324894  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.324904  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:27.324910  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:27.324976  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:27.351041  303437 cri.go:89] found id: ""
	I1210 07:11:27.351063  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.351072  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:27.351079  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:27.351145  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:27.375920  303437 cri.go:89] found id: ""
	I1210 07:11:27.375942  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.375950  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:27.375957  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:27.376016  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:27.400149  303437 cri.go:89] found id: ""
	I1210 07:11:27.400174  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.400183  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:27.400190  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:27.400248  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:27.436160  303437 cri.go:89] found id: ""
	I1210 07:11:27.436192  303437 logs.go:282] 0 containers: []
	W1210 07:11:27.436201  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:27.436211  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:27.436222  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:27.498671  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:27.498704  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:27.512854  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:27.512880  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:27.582038  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:27.573895   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.574782   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576306   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576889   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.578645   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:27.573895   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.574782   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576306   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.576889   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:27.578645   11746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:27.582102  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:27.582129  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:27.610246  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:27.610287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
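
The "container status" command above is a defensive shell chain: `which crictl || echo crictl` substitutes the bare name when crictl is not on PATH, so a missing binary fails loudly as "crictl: command not found" instead of letting sudo run a bare `ps -a`, and `|| sudo docker ps -a` falls back to Docker if the CRI listing fails for any reason. Expanded, it is roughly:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
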
	I1210 07:11:30.139493  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:30.150290  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:30.150358  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:30.176970  303437 cri.go:89] found id: ""
	I1210 07:11:30.177000  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.177008  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:30.177015  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:30.177079  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:30.202200  303437 cri.go:89] found id: ""
	I1210 07:11:30.202226  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.202235  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:30.202241  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:30.202300  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:30.226724  303437 cri.go:89] found id: ""
	I1210 07:11:30.226748  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.226757  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:30.226763  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:30.226825  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:30.251813  303437 cri.go:89] found id: ""
	I1210 07:11:30.251835  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.251844  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:30.251850  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:30.251912  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:30.277078  303437 cri.go:89] found id: ""
	I1210 07:11:30.277099  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.277109  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:30.277115  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:30.277172  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:30.305998  303437 cri.go:89] found id: ""
	I1210 07:11:30.306019  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.306027  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:30.306034  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:30.306091  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:30.334810  303437 cri.go:89] found id: ""
	I1210 07:11:30.334831  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.334839  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:30.334846  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:30.334903  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:30.359892  303437 cri.go:89] found id: ""
	I1210 07:11:30.359913  303437 logs.go:282] 0 containers: []
	W1210 07:11:30.359921  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:30.359930  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:30.359940  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:30.385054  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:30.385088  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:30.421360  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:30.421390  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:30.485019  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:30.485051  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:30.498844  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:30.498916  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:30.560538  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:30.552697   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.553538   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.554465   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.555942   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.556285   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:30.552697   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.553538   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.554465   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.555942   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:30.556285   11873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
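
The timestamps show the whole probe repeating on a roughly three-second cadence (07:11:24, :27, :30, :33, ...): pgrep for an apiserver process, a crictl sweep per component, then the same log gathering. The shape of that wait loop, as an illustrative sketch only (the interval is read off the timestamps; the deadline is an assumption, not a value from this log):

    deadline=$((SECONDS + 300))    # assumed overall timeout, not taken from this log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo 'apiserver never came up' >&2; exit 1; }
        sleep 3                    # matches the ~3 s spacing between attempts above
    done
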
	I1210 07:11:33.062385  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:33.073083  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:33.073165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:33.097439  303437 cri.go:89] found id: ""
	I1210 07:11:33.097463  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.097471  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:33.097478  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:33.097540  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:33.124732  303437 cri.go:89] found id: ""
	I1210 07:11:33.124754  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.124763  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:33.124769  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:33.124829  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:33.153513  303437 cri.go:89] found id: ""
	I1210 07:11:33.153536  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.153545  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:33.153550  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:33.153610  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:33.179491  303437 cri.go:89] found id: ""
	I1210 07:11:33.179518  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.179526  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:33.179533  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:33.179593  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:33.205039  303437 cri.go:89] found id: ""
	I1210 07:11:33.205232  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.205248  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:33.205255  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:33.205332  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:33.231637  303437 cri.go:89] found id: ""
	I1210 07:11:33.231661  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.231670  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:33.231677  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:33.231740  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:33.257596  303437 cri.go:89] found id: ""
	I1210 07:11:33.257622  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.257630  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:33.257636  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:33.257702  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:33.283943  303437 cri.go:89] found id: ""
	I1210 07:11:33.283968  303437 logs.go:282] 0 containers: []
	W1210 07:11:33.283978  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:33.283989  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:33.284003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:33.297130  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:33.297162  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:33.358971  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:33.351484   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.351972   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.353499   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.354087   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.355603   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:33.351484   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.351972   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.353499   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.354087   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:33.355603   11961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:33.359004  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:33.359053  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:33.383559  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:33.383593  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:33.411160  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:33.411184  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:35.975172  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:35.985598  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:35.985677  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:36.012649  303437 cri.go:89] found id: ""
	I1210 07:11:36.012687  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.012698  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:36.012705  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:36.012772  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:36.039233  303437 cri.go:89] found id: ""
	I1210 07:11:36.039301  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.039325  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:36.039344  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:36.039440  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:36.064743  303437 cri.go:89] found id: ""
	I1210 07:11:36.064766  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.064775  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:36.064781  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:36.064839  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:36.088939  303437 cri.go:89] found id: ""
	I1210 07:11:36.088961  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.088969  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:36.088975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:36.089037  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:36.116797  303437 cri.go:89] found id: ""
	I1210 07:11:36.116821  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.116830  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:36.116836  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:36.116894  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:36.141419  303437 cri.go:89] found id: ""
	I1210 07:11:36.141447  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.141456  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:36.141463  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:36.141525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:36.166138  303437 cri.go:89] found id: ""
	I1210 07:11:36.166165  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.166174  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:36.166180  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:36.166242  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:36.193939  303437 cri.go:89] found id: ""
	I1210 07:11:36.194014  303437 logs.go:282] 0 containers: []
	W1210 07:11:36.194036  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:36.194058  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:36.194096  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:36.250476  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:36.250507  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:36.263989  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:36.264070  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:36.328452  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:36.320900   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.321474   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323175   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323732   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.325316   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:36.320900   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.321474   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323175   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.323732   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:36.325316   12077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:36.328474  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:36.328487  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:36.353490  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:36.353523  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:38.890866  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:38.901365  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:38.901464  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:38.932423  303437 cri.go:89] found id: ""
	I1210 07:11:38.932450  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.932458  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:38.932465  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:38.932525  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:38.959879  303437 cri.go:89] found id: ""
	I1210 07:11:38.959907  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.959915  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:38.959921  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:38.959978  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:38.986312  303437 cri.go:89] found id: ""
	I1210 07:11:38.986338  303437 logs.go:282] 0 containers: []
	W1210 07:11:38.986347  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:38.986353  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:38.986410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:39.011808  303437 cri.go:89] found id: ""
	I1210 07:11:39.011830  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.011839  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:39.011845  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:39.011908  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:39.037634  303437 cri.go:89] found id: ""
	I1210 07:11:39.037675  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.037685  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:39.037691  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:39.037763  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:39.062989  303437 cri.go:89] found id: ""
	I1210 07:11:39.063073  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.063096  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:39.063114  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:39.063200  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:39.092710  303437 cri.go:89] found id: ""
	I1210 07:11:39.092732  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.092740  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:39.092749  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:39.092809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:39.116692  303437 cri.go:89] found id: ""
	I1210 07:11:39.116715  303437 logs.go:282] 0 containers: []
	W1210 07:11:39.116724  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:39.116735  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:39.116745  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:39.173134  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:39.173165  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:39.187543  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:39.187619  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:39.248942  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:39.241567   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.242335   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.243953   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.244270   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.245769   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:39.241567   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.242335   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.243953   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.244270   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:39.245769   12187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:39.248964  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:39.248976  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:39.273536  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:39.273572  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:41.801091  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:41.812394  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:41.812473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:41.838936  303437 cri.go:89] found id: ""
	I1210 07:11:41.839028  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.839042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:41.839050  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:41.839131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:41.864566  303437 cri.go:89] found id: ""
	I1210 07:11:41.864593  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.864603  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:41.864609  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:41.864673  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:41.889296  303437 cri.go:89] found id: ""
	I1210 07:11:41.889321  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.889330  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:41.889337  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:41.889396  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:41.915562  303437 cri.go:89] found id: ""
	I1210 07:11:41.915589  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.915601  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:41.915608  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:41.915670  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:41.953369  303437 cri.go:89] found id: ""
	I1210 07:11:41.953395  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.953404  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:41.953410  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:41.953473  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:41.985179  303437 cri.go:89] found id: ""
	I1210 07:11:41.985205  303437 logs.go:282] 0 containers: []
	W1210 07:11:41.985216  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:41.985223  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:41.985327  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:42.015327  303437 cri.go:89] found id: ""
	I1210 07:11:42.015400  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.015424  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:42.015443  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:42.015541  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:42.043382  303437 cri.go:89] found id: ""
	I1210 07:11:42.043407  303437 logs.go:282] 0 containers: []
	W1210 07:11:42.043421  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:42.043431  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:42.043443  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:42.080163  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:42.080196  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:42.139896  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:42.139935  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:42.156701  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:42.156737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:42.234579  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:42.225807   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.226427   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228068   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228448   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.229958   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:42.225807   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.226427   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228068   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.228448   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:42.229958   12309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:42.234662  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:42.234691  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
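
The "describe nodes" step runs the kubectl binary that minikube staged for the target Kubernetes version against the in-node admin kubeconfig; with nothing listening on 8443 it exits 1, which the collector records as a warning and skips past rather than aborting. The invocation, verbatim from the log, can be replayed inside the node:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
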
	I1210 07:11:44.763362  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:44.773978  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:44.774048  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:44.799637  303437 cri.go:89] found id: ""
	I1210 07:11:44.799665  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.799674  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:44.799680  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:44.799741  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:44.827772  303437 cri.go:89] found id: ""
	I1210 07:11:44.827797  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.827806  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:44.827812  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:44.827871  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:44.851977  303437 cri.go:89] found id: ""
	I1210 07:11:44.852005  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.852014  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:44.852020  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:44.852080  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:44.876554  303437 cri.go:89] found id: ""
	I1210 07:11:44.876580  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.876590  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:44.876596  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:44.876658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:44.903100  303437 cri.go:89] found id: ""
	I1210 07:11:44.903132  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.903141  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:44.903154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:44.903215  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:44.933312  303437 cri.go:89] found id: ""
	I1210 07:11:44.933333  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.933342  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:44.933348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:44.933407  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:44.969458  303437 cri.go:89] found id: ""
	I1210 07:11:44.969530  303437 logs.go:282] 0 containers: []
	W1210 07:11:44.969552  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:44.969569  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:44.969666  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:45.013288  303437 cri.go:89] found id: ""
	I1210 07:11:45.013381  303437 logs.go:282] 0 containers: []
	W1210 07:11:45.013403  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:45.013427  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:45.013468  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:45.111594  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:45.112597  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:45.131602  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:45.131636  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:45.220807  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:45.205512   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.206557   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.208854   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.209321   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.215241   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:45.205512   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.206557   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.208854   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.209321   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:45.215241   12411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:45.220830  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:45.220843  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:45.257708  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:45.257752  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
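
The sweep that just completed is one probe repeated per control-plane component: cri.go shells out to `crictl ps -a --quiet --name=<component>` and treats empty output as "no container". A minimal local sketch of that probe in Go (illustrative only — minikube runs these commands through its ssh_runner over SSH, and `listContainerIDs` is a hypothetical helper, not minikube's API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe in the log: ask crictl for all
// containers (any state) whose name matches the component and return
// the IDs it prints, one per line. Empty output means no match.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// This is the `found id: "" / 0 containers` case above.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```

Empty output with a zero exit status is exactly the repeated `found id: "" / 0 containers: []` pattern in the log.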
	I1210 07:11:47.792395  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:47.802865  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:47.802937  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:47.832152  303437 cri.go:89] found id: ""
	I1210 07:11:47.832175  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.832191  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:47.832198  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:47.832262  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:47.856843  303437 cri.go:89] found id: ""
	I1210 07:11:47.856868  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.856877  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:47.856883  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:47.856943  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:47.880564  303437 cri.go:89] found id: ""
	I1210 07:11:47.880586  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.880595  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:47.880601  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:47.880658  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:47.908243  303437 cri.go:89] found id: ""
	I1210 07:11:47.908264  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.908273  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:47.908280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:47.908337  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:47.951940  303437 cri.go:89] found id: ""
	I1210 07:11:47.951961  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.951969  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:47.951975  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:47.952033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:47.986418  303437 cri.go:89] found id: ""
	I1210 07:11:47.986437  303437 logs.go:282] 0 containers: []
	W1210 07:11:47.986446  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:47.986452  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:47.986511  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:48.018032  303437 cri.go:89] found id: ""
	I1210 07:11:48.018055  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.018064  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:48.018069  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:48.018131  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:48.045010  303437 cri.go:89] found id: ""
	I1210 07:11:48.045033  303437 logs.go:282] 0 containers: []
	W1210 07:11:48.045043  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:48.045052  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:48.045063  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:48.070773  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:48.070806  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:48.100419  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:48.100451  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:48.157253  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:48.157287  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:48.171891  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:48.171922  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:48.236843  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:48.228588   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.229356   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231115   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231743   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.233451   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:48.228588   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.229356   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231115   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.231743   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:48.233451   12537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
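
Every `describe nodes` attempt above fails the same way: each GET against https://localhost:8443 dies with "connection refused", meaning nothing is listening on the apiserver port at all, consistent with the empty kube-apiserver probes. A quick way to confirm that from Go, sketched with net.DialTimeout (host and port taken from the log; a diagnostic sketch, not part of minikube):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubectl errors above all reduce to this single condition:
	// no listener on the apiserver port named in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // e.g. connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```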
	I1210 07:11:50.738489  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:50.749165  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:50.749232  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:50.774993  303437 cri.go:89] found id: ""
	I1210 07:11:50.775032  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.775042  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:50.775049  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:50.775108  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:50.800355  303437 cri.go:89] found id: ""
	I1210 07:11:50.800380  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.800389  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:50.800396  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:50.800455  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:50.825116  303437 cri.go:89] found id: ""
	I1210 07:11:50.825139  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.825148  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:50.825154  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:50.825216  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:50.852419  303437 cri.go:89] found id: ""
	I1210 07:11:50.852441  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.852449  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:50.852455  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:50.852513  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:50.877502  303437 cri.go:89] found id: ""
	I1210 07:11:50.877522  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.877531  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:50.877537  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:50.877594  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:50.905139  303437 cri.go:89] found id: ""
	I1210 07:11:50.905161  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.905171  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:50.905177  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:50.905237  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:50.933267  303437 cri.go:89] found id: ""
	I1210 07:11:50.933291  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.933299  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:50.933305  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:50.933364  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:50.961246  303437 cri.go:89] found id: ""
	I1210 07:11:50.961267  303437 logs.go:282] 0 containers: []
	W1210 07:11:50.961276  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:50.961285  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:50.961296  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:50.989123  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:50.989149  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:51.046128  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:51.046168  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:51.060977  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:51.061014  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:51.126917  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:51.119326   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.119907   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.121515   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.122000   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.123466   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:51.119326   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.119907   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.121515   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.122000   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:51.123466   12645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:51.126938  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:51.126951  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
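
The "Gathering logs for kubelet/containerd" steps are plain `journalctl -u <unit> -n 400` calls wrapped in bash. A self-contained sketch of that gather (assumes a systemd host with passwordless sudo; `lastUnitLogs` is a made-up helper name for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
)

// lastUnitLogs fetches the trailing n journal lines for a systemd
// unit, matching the `journalctl -u <unit> -n 400` calls in the log.
func lastUnitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl",
		"-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := lastUnitLogs(unit, 400)
		if err != nil {
			fmt.Printf("gather %s: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s (%d bytes) ==\n", unit, len(logs))
	}
}
```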
	I1210 07:11:53.652260  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:53.662761  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:53.662827  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:53.692655  303437 cri.go:89] found id: ""
	I1210 07:11:53.692728  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.692755  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:53.692773  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:53.692852  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:53.726710  303437 cri.go:89] found id: ""
	I1210 07:11:53.726743  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.726752  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:53.726758  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:53.726816  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:53.751772  303437 cri.go:89] found id: ""
	I1210 07:11:53.751793  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.751802  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:53.751808  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:53.751867  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:53.776281  303437 cri.go:89] found id: ""
	I1210 07:11:53.776347  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.776371  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:53.776391  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:53.776475  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:53.801234  303437 cri.go:89] found id: ""
	I1210 07:11:53.801259  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.801268  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:53.801275  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:53.801330  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:53.830240  303437 cri.go:89] found id: ""
	I1210 07:11:53.830265  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.830273  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:53.830280  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:53.830341  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:53.855035  303437 cri.go:89] found id: ""
	I1210 07:11:53.855059  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.855069  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:53.855075  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:53.855140  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:53.883359  303437 cri.go:89] found id: ""
	I1210 07:11:53.883384  303437 logs.go:282] 0 containers: []
	W1210 07:11:53.883401  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:53.883411  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:53.883423  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:53.923136  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:53.923215  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:53.985138  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:53.985172  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:53.999740  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:53.999775  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:54.066156  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:54.058409   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.059038   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.060575   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.061111   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.062782   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:54.058409   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.059038   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.060575   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.061111   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:54.062782   12755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:54.066181  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:54.066194  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
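
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` (`-f` matches the full command line, `-x` exactly, `-n` newest); when that matches nothing, the whole sweep repeats roughly every three seconds, which is the cadence visible in the timestamps above. A sketch of such a poll loop (the one-minute deadline here is arbitrary; minikube's real wait logic and timeouts live elsewhere):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the pgrep check that opens every sweep:
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s gap between sweeps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```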
	I1210 07:11:56.591475  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:56.601960  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:56.602033  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:56.626286  303437 cri.go:89] found id: ""
	I1210 07:11:56.626311  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.626320  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:56.626327  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:56.626385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:56.650098  303437 cri.go:89] found id: ""
	I1210 07:11:56.650124  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.650133  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:56.650139  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:56.650201  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:56.677542  303437 cri.go:89] found id: ""
	I1210 07:11:56.677569  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.677578  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:56.677584  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:56.677659  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:56.709405  303437 cri.go:89] found id: ""
	I1210 07:11:56.709430  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.709439  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:56.709446  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:56.709508  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:56.739179  303437 cri.go:89] found id: ""
	I1210 07:11:56.739204  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.739212  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:56.739219  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:56.739277  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:56.766584  303437 cri.go:89] found id: ""
	I1210 07:11:56.766609  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.766618  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:56.766624  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:56.766691  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:56.791703  303437 cri.go:89] found id: ""
	I1210 07:11:56.791729  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.791739  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:56.791745  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:56.791809  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:56.817298  303437 cri.go:89] found id: ""
	I1210 07:11:56.817325  303437 logs.go:282] 0 containers: []
	W1210 07:11:56.817334  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:56.817344  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:56.817355  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:56.875173  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:56.875210  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:56.889120  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:56.889146  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:56.984238  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:56.976825   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.977286   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.978762   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.979412   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:56.980881   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:56.984258  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:56.984270  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:11:57.011593  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:57.011627  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
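
The "container status" gather uses a shell fallback — ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` — so it still returns something on hosts where crictl is missing or failing. Reproducing the same fallback shape from Go (a sketch assuming /bin/bash and sudo, as in the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same fallback shape as the log's "container status" gather:
	// prefer crictl, fall back to docker if crictl is absent or fails.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}
```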
	I1210 07:11:59.548660  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:59.559203  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:11:59.559272  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:11:59.584024  303437 cri.go:89] found id: ""
	I1210 07:11:59.584091  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.584113  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:11:59.584131  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:11:59.584223  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:11:59.609283  303437 cri.go:89] found id: ""
	I1210 07:11:59.609307  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.609316  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:11:59.609325  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:11:59.609385  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:11:59.633912  303437 cri.go:89] found id: ""
	I1210 07:11:59.633935  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.633944  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:11:59.633951  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:11:59.634012  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:11:59.660339  303437 cri.go:89] found id: ""
	I1210 07:11:59.660365  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.660373  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:11:59.660380  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:11:59.660437  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:11:59.697302  303437 cri.go:89] found id: ""
	I1210 07:11:59.697329  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.697342  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:59.697348  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:11:59.697410  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:11:59.733379  303437 cri.go:89] found id: ""
	I1210 07:11:59.733402  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.733411  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:11:59.733418  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:11:59.733488  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:11:59.758324  303437 cri.go:89] found id: ""
	I1210 07:11:59.758350  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.758360  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:59.758366  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:11:59.758423  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:11:59.788265  303437 cri.go:89] found id: ""
	I1210 07:11:59.788304  303437 logs.go:282] 0 containers: []
	W1210 07:11:59.788313  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:11:59.788323  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:11:59.788335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:59.816310  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:59.816335  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:59.875191  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:59.875227  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:59.888706  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:59.888737  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:59.964581  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:11:59.957369   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.958105   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959640   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.959946   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:11:59.961394   12980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:59.964604  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:11:59.964617  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
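
When the `describe nodes` gather fails, the parts the log captures are the exit status (1) and stderr. A sketch of running the same command and separating those two, with the binary path and kubeconfig taken verbatim from the log (standard os/exec handling, not minikube's own runner):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs against the node's kubeconfig.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	if err == nil {
		fmt.Println("describe nodes succeeded")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// With the apiserver down this is status 1, and stderr holds
		// the "connection refused" lines seen in the log.
		fmt.Printf("exit status %d\nstderr:\n%s",
			exitErr.ExitCode(), stderr.String())
		return
	}
	fmt.Println("could not run kubectl:", err)
}
```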
	I1210 07:12:02.490529  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:02.501579  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:02.501655  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:02.530852  303437 cri.go:89] found id: ""
	I1210 07:12:02.530876  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.530885  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:02.530894  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:02.530955  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:02.561336  303437 cri.go:89] found id: ""
	I1210 07:12:02.561361  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.561370  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:02.561377  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:02.561434  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:02.585933  303437 cri.go:89] found id: ""
	I1210 07:12:02.585963  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.585972  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:02.585979  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:02.586040  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:02.611097  303437 cri.go:89] found id: ""
	I1210 07:12:02.611122  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.611131  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:02.611137  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:02.611199  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:02.637900  303437 cri.go:89] found id: ""
	I1210 07:12:02.637925  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.637934  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:02.637941  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:02.638002  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:02.669431  303437 cri.go:89] found id: ""
	I1210 07:12:02.669457  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.669467  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:02.669474  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:02.669536  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:02.704940  303437 cri.go:89] found id: ""
	I1210 07:12:02.704967  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.704976  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:02.704983  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:02.705044  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:02.733218  303437 cri.go:89] found id: ""
	I1210 07:12:02.733241  303437 logs.go:282] 0 containers: []
	W1210 07:12:02.733251  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:02.733260  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:02.733271  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:02.791544  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:02.791580  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:02.805689  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:02.805716  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:02.873516  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:02.865755   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.866425   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868077   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.868380   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:02.870008   13082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:02.873536  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:02.873548  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:02.898899  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:02.898932  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
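
The dmesg gather filters the kernel ring buffer to warning-and-above severities (`--level warn,err,crit,alert,emerg`), keeps human-readable output without a pager or color (`-PH -L=never`), and trims to the last 400 lines. The same pipeline invoked from Go (sketch, same bash/sudo assumptions as above):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same filter as the log's dmesg gather: warning-and-above kernel
	// messages, human readable, no pager or color, last 400 lines.
	script := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Println("dmesg gather failed:", err)
		return
	}
	fmt.Print(string(out))
}
```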
	I1210 07:12:05.445135  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:05.455827  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:05.455898  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:05.481329  303437 cri.go:89] found id: ""
	I1210 07:12:05.481352  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.481363  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:05.481370  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:05.481428  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:05.507339  303437 cri.go:89] found id: ""
	I1210 07:12:05.507362  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.507371  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:05.507378  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:05.507444  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:05.531971  303437 cri.go:89] found id: ""
	I1210 07:12:05.531995  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.532004  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:05.532010  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:05.532074  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:05.563046  303437 cri.go:89] found id: ""
	I1210 07:12:05.563069  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.563078  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:05.563084  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:05.563147  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:05.587778  303437 cri.go:89] found id: ""
	I1210 07:12:05.587801  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.587810  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:05.587816  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:05.587874  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:05.611952  303437 cri.go:89] found id: ""
	I1210 07:12:05.611973  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.611982  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:05.611988  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:05.612047  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:05.636683  303437 cri.go:89] found id: ""
	I1210 07:12:05.636705  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.636715  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:05.636721  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:05.636781  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:05.674580  303437 cri.go:89] found id: ""
	I1210 07:12:05.674609  303437 logs.go:282] 0 containers: []
	W1210 07:12:05.674619  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:05.674628  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:05.674640  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:05.690150  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:05.690176  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:05.761058  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:05.753113   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.753757   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.755406   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.756072   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:05.757763   13199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:05.761078  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:05.761090  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:05.786479  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:05.786515  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.814400  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:05.814426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.372748  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:08.382940  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:12:08.383032  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:12:08.406822  303437 cri.go:89] found id: ""
	I1210 07:12:08.406851  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.406860  303437 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:12:08.406867  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1210 07:12:08.406931  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:12:08.431746  303437 cri.go:89] found id: ""
	I1210 07:12:08.431775  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.431786  303437 logs.go:284] No container was found matching "etcd"
	I1210 07:12:08.431795  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1210 07:12:08.431857  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:12:08.456129  303437 cri.go:89] found id: ""
	I1210 07:12:08.456152  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.456161  303437 logs.go:284] No container was found matching "coredns"
	I1210 07:12:08.456167  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:12:08.456226  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:12:08.481945  303437 cri.go:89] found id: ""
	I1210 07:12:08.481981  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.481990  303437 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:12:08.481997  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:12:08.482070  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:12:08.511057  303437 cri.go:89] found id: ""
	I1210 07:12:08.511080  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.511089  303437 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:08.511095  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:12:08.511165  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:12:08.537072  303437 cri.go:89] found id: ""
	I1210 07:12:08.537094  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.537106  303437 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:12:08.537113  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1210 07:12:08.537188  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:12:08.562930  303437 cri.go:89] found id: ""
	I1210 07:12:08.562961  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.562970  303437 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:08.562992  303437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 07:12:08.563116  303437 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 07:12:08.587421  303437 cri.go:89] found id: ""
	I1210 07:12:08.587446  303437 logs.go:282] 0 containers: []
	W1210 07:12:08.587455  303437 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:12:08.587464  303437 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:08.587501  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.646970  303437 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:08.647003  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:08.661398  303437 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:08.661426  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:08.746222  303437 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:12:08.738312   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.739170   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.740871   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.741378   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:08.742946   13312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:08.746254  303437 logs.go:123] Gathering logs for containerd ...
	I1210 07:12:08.746267  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1210 07:12:08.772476  303437 logs.go:123] Gathering logs for container status ...
	I1210 07:12:08.772510  303437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:11.303459  303437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:11.315726  303437 out.go:203] 
	W1210 07:12:11.316890  303437 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:12:11.316924  303437 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:12:11.316933  303437 out.go:285] * Related issues:
	W1210 07:12:11.316946  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:12:11.316957  303437 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:12:11.318146  303437 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229542174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229558412Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229590757Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229604525Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229613715Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229623348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229633022Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229642441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229657818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229687390Z" level=info msg="Connect containerd service"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.229958744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.230529901Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250111138Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250206229Z" level=info msg="Start recovering state"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.250507327Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.251405174Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273418724Z" level=info msg="Start event monitor"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273477383Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273488378Z" level=info msg="Start streaming server"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273499069Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273508768Z" level=info msg="runtime interface starting up..."
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273515496Z" level=info msg="starting plugins..."
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273546668Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:06:07 newest-cni-168808 containerd[554]: time="2025-12-10T07:06:07.273837124Z" level=info msg="containerd successfully booted in 0.065786s"
	Dec 10 07:06:07 newest-cni-168808 systemd[1]: Started containerd.service - containerd container runtime.
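(Note on the containerd section above: the only error in this boot sequence, "failed to load cni during init ... no network config found in /etc/cni/net.d", is typically benign at first boot, before minikube has written a CNI config; containerd itself reports booting cleanly in 0.065786s. A quick way to inspect the directory by hand, as a sketch reusing the profile name from this log:

	minikube -p newest-cni-168808 ssh -- ls -la /etc/cni/net.d

An empty directory here matches the error and is not, by itself, the cause of this test failure.)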
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:12:24.777417   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:24.778098   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:24.779827   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:24.780328   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:12:24.781884   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:12:24 up  1:54,  0 user,  load average: 0.44, 0.50, 1.05
	Linux newest-cni-168808 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:21 newest-cni-168808 kubelet[13836]: E1210 07:12:21.968257   13836 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:21 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:22 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 10 07:12:22 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:22 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:22 newest-cni-168808 kubelet[13842]: E1210 07:12:22.729167   13842 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:22 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:22 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:23 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 10 07:12:23 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:23 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:23 newest-cni-168808 kubelet[13878]: E1210 07:12:23.472068   13878 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:23 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:23 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:12:24 newest-cni-168808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 10 07:12:24 newest-cni-168808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:24 newest-cni-168808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:12:24 newest-cni-168808 kubelet[13884]: E1210 07:12:24.232845   13884 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:12:24 newest-cni-168808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:12:24 newest-cni-168808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
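(The dump above explains the K8S_APISERVER_MISSING exit: every crictl query for a control-plane container returned an empty list, and pgrep never found a kube-apiserver process. To re-run the same checks by hand, a sketch reusing the exact commands minikube's log gatherer ran, with the profile name taken from this log:

	minikube -p newest-cni-168808 ssh -- sudo pgrep -af kube-apiserver
	minikube -p newest-cni-168808 ssh -- sudo crictl ps -a --name=kube-apiserver

Both returning nothing confirms the apiserver container was never created, which points at kubelet rather than at the apiserver flags suggested in the exit message.)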
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-168808 -n newest-cni-168808: exit status 2 (385.721323ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-168808" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.30s)
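(Root cause visible in the kubelet section above: every restart, counters 4 through 7, dies during config validation with "kubelet is configured to not run on a host using cgroup v1", so the static-pod apiserver it would launch never appears and Pause cannot find a running cluster. The kernel section shows a 5.15.0-1084-aws Ubuntu 20.04 host, which boots with cgroup v1 by default. A quick check of which cgroup hierarchy the node sees, as a sketch using standard GNU stat, where cgroup2fs means v2 and tmpfs means v1:

	minikube -p newest-cni-168808 ssh -- stat -fc %T /sys/fs/cgroup

On a cgroup v1 host this prints tmpfs, matching the validation error above.)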

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.93s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
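(Each warning below is one failed poll of that pod list against an apiserver that is no longer reachable at 192.168.85.2:8443. A hand-run equivalent of the poll, as a sketch with a hypothetical profile/context name no-preload-000000 standing in for this run's actual profile:

	kubectl --context no-preload-000000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the apiserver is down this fails with the same connection-refused error shown in the warnings.)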
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 43 more times while polling the unreachable apiserver]
E1210 07:16:27.652245    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 10 more times]
E1210 07:16:38.876415    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 5 more times]
E1210 07:16:44.570930    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 56 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
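For context on where these warnings come from: helpers_test.go polls the cluster for pods matching a label selector and logs each failed attempt before retrying. The sketch below is an illustrative reconstruction of that pattern using client-go, not minikube's actual helper; the function name, intervals, and timeout are assumptions.

// waitForPods is a hedged sketch of the poll loop behind the repeated
// WARNING lines: list pods by label, log failures, retry until a deadline.
// Illustrative only; names and intervals are assumptions, not minikube's code.
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// With the apiserver down, every List fails with "connection
			// refused" and is logged exactly like the warnings above.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
			time.Sleep(3 * time.Second)
			continue
		}
		if len(pods.Items) > 0 {
			return nil // pods found; callers would go on to check readiness
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods in %q matching %q", ns, selector)
}

On this run the loop never succeeded, which is why the same warning recurs until the test's timeout expires.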
E1210 07:18:01.950540    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [27 more consecutive occurrences of the same dashboard pod-list warning collapsed]
E1210 07:18:28.593971    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [8 occurrences of the error above collapsed, logged between 07:18:28.593 and 07:18:29.239 at roughly doubling intervals (~6, 11, 21, 41, 81, 161, 322 ms), the signature of an exponential retry backoff]
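These cert_rotation.go errors come from client-go's TLS transport cache trying to reload a client certificate whose backing file has disappeared, here because the auto-225109 profile was deleted while a cached transport still referenced its kubeconfig. Below is a minimal reproduction of the failure mode, assuming only that the kubeconfig points at the paths shown in the log; it is a simplified analogue, not client-go's actual rotation code, and the .key path is inferred from minikube's usual profile layout.

// Reproduces the "Loading client cert failed ... no such file or directory"
// error: the cert/key paths from a deleted minikube profile no longer exist.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// certFile is copied from the log; keyFile is an assumption based on
	// minikube's usual profile layout.
	certFile := "/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt"
	keyFile := "/home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.key"

	// client-go periodically re-reads the pair; once the files are gone the
	// reload fails with the os.PathError seen in the log lines above.
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		fmt.Println("Loading client cert failed:", err)
	}
}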
    [the dashboard pod-list warning recurred 8 times here, interleaved with the continuing auto-225109 retries below]
E1210 07:18:29.880839    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:18:31.162346    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:18:33.724227    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:18:37.013607    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:18:38.845794    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [3 occurrences of the dashboard pod-list warning around the error above collapsed]
I1210 07:18:39.531349    4116 config.go:182] Loaded profile config "enable-default-cni-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
E1210 07:18:40.888388    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [9 occurrences of the dashboard pod-list warning around the error above collapsed]
E1210 07:18:49.087920    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [21 more consecutive occurrences of the dashboard pod-list warning collapsed]
E1210 07:19:09.569980    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [10 more consecutive occurrences of the dashboard pod-list warning collapsed]
E1210 07:19:19.662536    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/default-k8s-diff-port-395269/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [31 more consecutive occurrences of the dashboard pod-list warning collapsed]
E1210 07:19:50.531653    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/auto-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [9 more consecutive occurrences of the dashboard pod-list warning collapsed]
E1210 07:19:59.749507    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
    [7 occurrences of the error above collapsed, logged between 07:19:59.749 and 07:20:00.073, with the retry intervals again roughly doubling]
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:20:00.395380    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:20:01.037574    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:20:02.319254    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:20:04.880628    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 07:20:10.001945    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 2 (320.480025ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-320236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-320236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.068µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-320236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
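The failure above is the harness timing out while polling for the dashboard pod against an apiserver that has stopped answering. As a minimal sketch, the equivalent manual check, using the context and label selector from this run, would be:

	kubectl --context no-preload-320236 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver down this returns the same "connection refused" as the warnings above, so the wait loop simply retries until its 9m0s deadline expires; the empty "Addon deployment info" follows from the describe call failing the same way.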
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-320236
helpers_test.go:244: (dbg) docker inspect no-preload-320236:

-- stdout --
	[
	    {
	        "Id": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	        "Created": "2025-12-10T06:50:11.529745127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296159,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:00:31.906944272Z",
	            "FinishedAt": "2025-12-10T07:00:30.524095791Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/hosts",
	        "LogPath": "/var/lib/docker/containers/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df/afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df-json.log",
	        "Name": "/no-preload-320236",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-320236:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-320236",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "afeff1ab0e38b5966dbae0670f82b3fbdc7a29047ebb873ecd3e5b111359f0df",
	                "LowerDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a-init/diff:/var/lib/docker/overlay2/911aa86fb7d7c3315140a65752de758f6336d44a8c0b9fea5c4dce0c2e352c7b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028adb9e5a1761ac61f66afe550e5fb7744b2ce2a6e65f91187f258944960f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-320236",
	                "Source": "/var/lib/docker/volumes/no-preload-320236/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-320236",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-320236",
	                "name.minikube.sigs.k8s.io": "no-preload-320236",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be5eb1503ed127ef0c2d044ffb245c38ab2a7657e10a797a5912ae4059c29e3f",
	            "SandboxKey": "/var/run/docker/netns/be5eb1503ed1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-320236": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:26:8b:69:77:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ffc647756423b4e81a1338ec5e1d5f1765ed1034ce5ae186c5fbcc84bf8cb09",
	                    "EndpointID": "31d9f19780654066d5dbb87109e480cce007c3d0fa04a397a4cec6b92d85ea58",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-320236",
	                        "afeff1ab0e38"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
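The inspect output confirms the node container is still running, with the apiserver's 8443/tcp published on 127.0.0.1:33101. As a sketch, that mapping can be read back directly with the same Go-template style minikube itself uses later in this log for 22/tcp:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-320236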
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 2 (338.287098ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
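Taken together with the earlier APIServer probe, the picture is a container whose Host state is Running while the apiserver inside it is Stopped, which is why the harness skips kubectl and falls through to log collection. Both fields can be read in one call (template assumed by analogy with the two single-field invocations above):

	out/minikube-linux-arm64 status -p no-preload-320236 --format='{{.Host}} {{.APIServer}}'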
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-320236 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl status kubelet --all --full --no-pager                                                           │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl cat kubelet --no-pager                                                                           │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo journalctl -xeu kubelet --all --full --no-pager                                                            │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo cat /etc/kubernetes/kubelet.conf                                                                           │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo cat /var/lib/kubelet/config.yaml                                                                           │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl status docker --all --full --no-pager                                                            │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl cat docker --no-pager                                                                            │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo cat /etc/docker/daemon.json                                                                                │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p enable-default-cni-225109 sudo docker system info                                                                                         │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl status cri-docker --all --full --no-pager                                                        │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl cat cri-docker --no-pager                                                                        │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                   │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p enable-default-cni-225109 sudo cat /usr/lib/systemd/system/cri-docker.service                                                             │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo cri-dockerd --version                                                                                      │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl status containerd --all --full --no-pager                                                        │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl cat containerd --no-pager                                                                        │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo cat /lib/systemd/system/containerd.service                                                                 │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo cat /etc/containerd/config.toml                                                                            │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo containerd config dump                                                                                     │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl status crio --all --full --no-pager                                                              │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │                     │
	│ ssh     │ -p enable-default-cni-225109 sudo systemctl cat crio --no-pager                                                                              │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                    │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ ssh     │ -p enable-default-cni-225109 sudo crio config                                                                                                │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ delete  │ -p enable-default-cni-225109                                                                                                                 │ enable-default-cni-225109 │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │ 10 Dec 25 07:19 UTC │
	│ start   │ -p bridge-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd │ bridge-225109             │ jenkins │ v1.37.0 │ 10 Dec 25 07:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
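The audit table records the standard post-mortem sweep of kubelet and container-runtime diagnostics, here run over ssh against the enable-default-cni-225109 profile that was active at the time. Any entry can be replayed by hand against the failing profile instead, e.g. (command verbatim from the table, profile swapped in):

	out/minikube-linux-arm64 ssh -p no-preload-320236 sudo systemctl status containerd --all --full --no-pager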
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:19:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:19:10.905796  352848 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:19:10.905915  352848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:19:10.905927  352848 out.go:374] Setting ErrFile to fd 2...
	I1210 07:19:10.905933  352848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:19:10.906211  352848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 07:19:10.906654  352848 out.go:368] Setting JSON to false
	I1210 07:19:10.907655  352848 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7301,"bootTime":1765343850,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:19:10.907729  352848 start.go:143] virtualization:  
	I1210 07:19:10.913971  352848 out.go:179] * [bridge-225109] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:19:10.917529  352848 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:19:10.917564  352848 notify.go:221] Checking for updates...
	I1210 07:19:10.924256  352848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:19:10.927518  352848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:19:10.930703  352848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 07:19:10.933779  352848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:19:10.936946  352848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:19:10.940494  352848 config.go:182] Loaded profile config "no-preload-320236": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 07:19:10.940603  352848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:19:10.968536  352848 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:19:10.968647  352848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:19:11.026172  352848 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:19:11.016688816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:19:11.026287  352848 docker.go:319] overlay module found
	I1210 07:19:11.029530  352848 out.go:179] * Using the docker driver based on user configuration
	I1210 07:19:11.032557  352848 start.go:309] selected driver: docker
	I1210 07:19:11.032580  352848 start.go:927] validating driver "docker" against <nil>
	I1210 07:19:11.032594  352848 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:19:11.033321  352848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:19:11.086782  352848 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:19:11.076279914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:19:11.086960  352848 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:19:11.087272  352848 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:19:11.090211  352848 out.go:179] * Using Docker driver with root privileges
	I1210 07:19:11.093193  352848 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:19:11.093224  352848 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:19:11.093320  352848 start.go:353] cluster config:
	{Name:bridge-225109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-225109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:19:11.096497  352848 out.go:179] * Starting "bridge-225109" primary control-plane node in "bridge-225109" cluster
	I1210 07:19:11.099367  352848 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 07:19:11.102322  352848 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:19:11.105247  352848 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1210 07:19:11.105347  352848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:19:11.124828  352848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:19:11.124851  352848 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:19:11.161254  352848 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1210 07:19:11.332376  352848 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 status code: 404
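The two 404s above are the preload-tarball probes: the GCS bucket is tried first, then the GitHub release fallback, and with both missing minikube builds the cache from individually saved images instead (the cache.go lines further down). A quick way to confirm the tarball really is absent upstream, using the URL verbatim from the log:

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 | head -n 1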
	I1210 07:19:11.332555  352848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/config.json ...
	I1210 07:19:11.332594  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/config.json: {Name:mkea345c2351662eafb7a0d5a379d88e89464eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:11.332787  352848 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:19:11.332806  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:11.332815  352848 start.go:360] acquireMachinesLock for bridge-225109: {Name:mkdab863a13e0dcfcda5cb683467b7cbf083855f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.332870  352848 start.go:364] duration metric: took 42.971µs to acquireMachinesLock for "bridge-225109"
	I1210 07:19:11.332889  352848 start.go:93] Provisioning new machine with config: &{Name:bridge-225109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-225109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:19:11.332945  352848 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:19:11.336350  352848 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:19:11.336586  352848 start.go:159] libmachine.API.Create for "bridge-225109" (driver="docker")
	I1210 07:19:11.336615  352848 client.go:173] LocalClient.Create starting
	I1210 07:19:11.336670  352848 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem
	I1210 07:19:11.336702  352848 main.go:143] libmachine: Decoding PEM data...
	I1210 07:19:11.336719  352848 main.go:143] libmachine: Parsing certificate...
	I1210 07:19:11.336769  352848 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem
	I1210 07:19:11.336785  352848 main.go:143] libmachine: Decoding PEM data...
	I1210 07:19:11.336801  352848 main.go:143] libmachine: Parsing certificate...
	I1210 07:19:11.337137  352848 cli_runner.go:164] Run: docker network inspect bridge-225109 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:19:11.370005  352848 cli_runner.go:211] docker network inspect bridge-225109 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:19:11.370090  352848 network_create.go:284] running [docker network inspect bridge-225109] to gather additional debugging logs...
	I1210 07:19:11.370106  352848 cli_runner.go:164] Run: docker network inspect bridge-225109
	W1210 07:19:11.385175  352848 cli_runner.go:211] docker network inspect bridge-225109 returned with exit code 1
	I1210 07:19:11.385220  352848 network_create.go:287] error running [docker network inspect bridge-225109]: docker network inspect bridge-225109: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-225109 not found
	I1210 07:19:11.385235  352848 network_create.go:289] output of [docker network inspect bridge-225109]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-225109 not found
	
	** /stderr **
	I1210 07:19:11.385325  352848 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:19:11.411125  352848 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
	I1210 07:19:11.411463  352848 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b6dea00dc193 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:ca:73:41:95:cc} reservation:<nil>}
	I1210 07:19:11.411792  352848 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ab13536e79d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:47:e5:9e:c6:4a} reservation:<nil>}
	I1210 07:19:11.412211  352848 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2ae90}
	I1210 07:19:11.412232  352848 network_create.go:124] attempt to create docker network bridge-225109 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:19:11.412289  352848 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-225109 bridge-225109
	I1210 07:19:11.471287  352848 network_create.go:108] docker network bridge-225109 192.168.76.0/24 created
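Subnet selection above walks the private 192.168.x.0/24 candidates, skips the three already claimed by other profiles, and creates the network on the first free one (192.168.76.0/24). Networks created this way can be listed via the label the create command sets:

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true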
	I1210 07:19:11.471327  352848 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-225109" container
	I1210 07:19:11.471399  352848 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:19:11.487247  352848 cli_runner.go:164] Run: docker volume create bridge-225109 --label name.minikube.sigs.k8s.io=bridge-225109 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:19:11.504588  352848 oci.go:103] Successfully created a docker volume bridge-225109
	I1210 07:19:11.504688  352848 cli_runner.go:164] Run: docker run --rm --name bridge-225109-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-225109 --entrypoint /usr/bin/test -v bridge-225109:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:19:11.510333  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:11.690600  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:11.865087  352848 cache.go:107] acquiring lock: {Name:mk7e5e37b00e9a9dad987c3cbf2d14fb3e085217 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865217  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 07:19:11.865228  352848 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 159.329µs
	I1210 07:19:11.865236  352848 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 07:19:11.865248  352848 cache.go:107] acquiring lock: {Name:mkeb1fa8dab49600ef80d840b464bd8533c4cb6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865280  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 exists
	I1210 07:19:11.865285  352848 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3" took 39.024µs
	I1210 07:19:11.865291  352848 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 succeeded
	I1210 07:19:11.865300  352848 cache.go:107] acquiring lock: {Name:mkb1c8b0d22db746576a3ea57ea1cd2bf308d320 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865329  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 exists
	I1210 07:19:11.865335  352848 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3" took 35.75µs
	I1210 07:19:11.865341  352848 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 succeeded
	I1210 07:19:11.865352  352848 cache.go:107] acquiring lock: {Name:mk1c8262b3af50ea9f0658e134d5d1e45690c2ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865377  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 exists
	I1210 07:19:11.865382  352848 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3" took 33.084µs
	I1210 07:19:11.865393  352848 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 succeeded
	I1210 07:19:11.865402  352848 cache.go:107] acquiring lock: {Name:mkf076b1a6306c7ead02f620a535f4dce2be2a45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865426  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 exists
	I1210 07:19:11.865431  352848 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3" took 29.932µs
	I1210 07:19:11.865436  352848 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 succeeded
	I1210 07:19:11.865444  352848 cache.go:107] acquiring lock: {Name:mk96e0a0b4216268ca66b2800d3e0b394c911f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865468  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1210 07:19:11.865474  352848 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.08µs
	I1210 07:19:11.865479  352848 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1210 07:19:11.865488  352848 cache.go:107] acquiring lock: {Name:mk49179ee96b27fc020a2438a2984fba8f050e2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865512  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1210 07:19:11.865516  352848 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.227µs
	I1210 07:19:11.865522  352848 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1210 07:19:11.865530  352848 cache.go:107] acquiring lock: {Name:mk8ce68d2a56a7659694e14d150cebfb6fc3181f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:19:11.865567  352848 cache.go:115] /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1210 07:19:11.865572  352848 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 42.79µs
	I1210 07:19:11.865579  352848 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1210 07:19:11.865585  352848 cache.go:87] Successfully saved all images to host disk.
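Because no preload tarball exists for v1.34.3/containerd/arm64, each core image is kept as its own tar file instead. The resulting cache can be inspected directly on the host (path verbatim from the log):

	ls /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/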
	I1210 07:19:12.057502  352848 oci.go:107] Successfully prepared a docker volume bridge-225109
	I1210 07:19:12.057563  352848 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	W1210 07:19:12.057705  352848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:19:12.057817  352848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:19:12.115497  352848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-225109 --name bridge-225109 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-225109 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-225109 --network bridge-225109 --ip 192.168.76.2 --volume bridge-225109:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
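That docker run line is the whole KIC node in one invocation: privileged, pinned to the static IP computed above, /var on the named volume, and every guest port published to an ephemeral 127.0.0.1 port (the --publish=127.0.0.1:: form lets Docker choose the host port, which is why the earlier inspect shows ports like 33098-33102). The assignments can be listed afterwards with:

	docker port bridge-225109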
	I1210 07:19:12.438283  352848 cli_runner.go:164] Run: docker container inspect bridge-225109 --format={{.State.Running}}
	I1210 07:19:12.461547  352848 cli_runner.go:164] Run: docker container inspect bridge-225109 --format={{.State.Status}}
	I1210 07:19:12.484843  352848 cli_runner.go:164] Run: docker exec bridge-225109 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:19:12.535486  352848 oci.go:144] the created container "bridge-225109" has a running status.
	I1210 07:19:12.535517  352848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa...
	I1210 07:19:12.826049  352848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:19:12.862031  352848 cli_runner.go:164] Run: docker container inspect bridge-225109 --format={{.State.Status}}
	I1210 07:19:12.890585  352848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:19:12.890606  352848 kic_runner.go:114] Args: [docker exec --privileged bridge-225109 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:19:12.940520  352848 cli_runner.go:164] Run: docker container inspect bridge-225109 --format={{.State.Status}}
	I1210 07:19:12.963325  352848 machine.go:94] provisionDockerMachine start ...
	I1210 07:19:12.963426  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:12.983770  352848 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:12.984109  352848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 07:19:12.984126  352848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:19:12.984800  352848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50974->127.0.0.1:33128: read: connection reset by peer
	I1210 07:19:16.138680  352848 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-225109
	
	I1210 07:19:16.138703  352848 ubuntu.go:182] provisioning hostname "bridge-225109"
	I1210 07:19:16.138766  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:16.157028  352848 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:16.157369  352848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 07:19:16.157386  352848 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-225109 && echo "bridge-225109" | sudo tee /etc/hostname
	I1210 07:19:16.324058  352848 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-225109
	
	I1210 07:19:16.324199  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:16.341274  352848 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:16.341597  352848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1210 07:19:16.341613  352848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-225109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-225109/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-225109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:19:16.495570  352848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
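
Note: provisioning dials the container's published SSH port (127.0.0.1:33128) and runs the hostname and /etc/hosts commands over that session; the first dial at 07:19:12 hit a connection reset and was retried. A rough sketch of the dial-and-run step using golang.org/x/crypto/ssh, with the key path and port taken from the log (libmachine's real client, including its retry loop, differs):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port come from the log; adjust for a real run.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33128", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname") // the first provisioning command above
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
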
	I1210 07:19:16.495618  352848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22094-2307/.minikube CaCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22094-2307/.minikube}
	I1210 07:19:16.495642  352848 ubuntu.go:190] setting up certificates
	I1210 07:19:16.495661  352848 provision.go:84] configureAuth start
	I1210 07:19:16.495759  352848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-225109
	I1210 07:19:16.515082  352848 provision.go:143] copyHostCerts
	I1210 07:19:16.515162  352848 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem, removing ...
	I1210 07:19:16.515174  352848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem
	I1210 07:19:16.515251  352848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/ca.pem (1078 bytes)
	I1210 07:19:16.515357  352848 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem, removing ...
	I1210 07:19:16.515368  352848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem
	I1210 07:19:16.515397  352848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/cert.pem (1123 bytes)
	I1210 07:19:16.515454  352848 exec_runner.go:144] found /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem, removing ...
	I1210 07:19:16.515463  352848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem
	I1210 07:19:16.515487  352848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22094-2307/.minikube/key.pem (1675 bytes)
	I1210 07:19:16.515535  352848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem org=jenkins.bridge-225109 san=[127.0.0.1 192.168.76.2 bridge-225109 localhost minikube]
	I1210 07:19:16.764022  352848 provision.go:177] copyRemoteCerts
	I1210 07:19:16.764093  352848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:19:16.764132  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:16.781730  352848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa Username:docker}
	I1210 07:19:16.891424  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:19:16.908830  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:19:16.926417  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:19:16.943794  352848 provision.go:87] duration metric: took 448.098279ms to configureAuth
	I1210 07:19:16.943870  352848 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:19:16.944069  352848 config.go:182] Loaded profile config "bridge-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 07:19:16.944085  352848 machine.go:97] duration metric: took 3.980740792s to provisionDockerMachine
	I1210 07:19:16.944093  352848 client.go:176] duration metric: took 5.607471504s to LocalClient.Create
	I1210 07:19:16.944117  352848 start.go:167] duration metric: took 5.607527218s to libmachine.API.Create "bridge-225109"
	I1210 07:19:16.944127  352848 start.go:293] postStartSetup for "bridge-225109" (driver="docker")
	I1210 07:19:16.944139  352848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:19:16.944189  352848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:19:16.944235  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:16.961067  352848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa Username:docker}
	I1210 07:19:17.067164  352848 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:19:17.070523  352848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:19:17.070554  352848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:19:17.070566  352848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/addons for local assets ...
	I1210 07:19:17.070619  352848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22094-2307/.minikube/files for local assets ...
	I1210 07:19:17.070699  352848 filesync.go:149] local asset: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1210 07:19:17.070804  352848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:19:17.078237  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:19:17.096496  352848 start.go:296] duration metric: took 152.350537ms for postStartSetup
	I1210 07:19:17.096867  352848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-225109
	I1210 07:19:17.114295  352848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/config.json ...
	I1210 07:19:17.114658  352848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:19:17.114718  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:17.133180  352848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa Username:docker}
	I1210 07:19:17.244612  352848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:19:17.249605  352848 start.go:128] duration metric: took 5.916645972s to createHost
	I1210 07:19:17.249632  352848 start.go:83] releasing machines lock for "bridge-225109", held for 5.916752862s
	I1210 07:19:17.249714  352848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-225109
	I1210 07:19:17.267434  352848 ssh_runner.go:195] Run: cat /version.json
	I1210 07:19:17.267482  352848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:19:17.267491  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:17.267547  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:17.292435  352848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa Username:docker}
	I1210 07:19:17.295314  352848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa Username:docker}
	I1210 07:19:17.394660  352848 ssh_runner.go:195] Run: systemctl --version
	I1210 07:19:17.487350  352848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:19:17.491721  352848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:19:17.491827  352848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:19:17.519754  352848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:19:17.519774  352848 start.go:496] detecting cgroup driver to use...
	I1210 07:19:17.519803  352848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:19:17.519851  352848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1210 07:19:17.534688  352848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:19:17.547669  352848 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:19:17.547802  352848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:19:17.565069  352848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:19:17.583739  352848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:19:17.706840  352848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:19:17.824128  352848 docker.go:234] disabling docker service ...
	I1210 07:19:17.824234  352848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:19:17.851838  352848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:19:17.866042  352848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:19:17.997926  352848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:19:18.120212  352848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:19:18.133917  352848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:19:18.149357  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:18.298209  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:19:18.306959  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:19:18.315363  352848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:19:18.315431  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:19:18.324020  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:19:18.332743  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:19:18.341310  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:19:18.350036  352848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:19:18.358153  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:19:18.366933  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:19:18.375863  352848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:19:18.384333  352848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:19:18.391698  352848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:19:18.398779  352848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:18.507381  352848 ssh_runner.go:195] Run: sudo systemctl restart containerd
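
Note: the sed commands above rewrite /etc/containerd/config.toml in place (pinning sandbox_image, forcing the runc v2 runtime, setting SystemdCgroup = false to match the detected "cgroupfs" driver) before containerd is restarted. A minimal sketch of the SystemdCgroup rewrite as a Go regexp instead of sed, with the input inlined for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for: sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
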
	I1210 07:19:18.605547  352848 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1210 07:19:18.605661  352848 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1210 07:19:18.609568  352848 start.go:564] Will wait 60s for crictl version
	I1210 07:19:18.609638  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:18.613375  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:19:18.638359  352848 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1210 07:19:18.638447  352848 ssh_runner.go:195] Run: containerd --version
	I1210 07:19:18.658773  352848 ssh_runner.go:195] Run: containerd --version
	I1210 07:19:18.689320  352848 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1210 07:19:18.692222  352848 cli_runner.go:164] Run: docker network inspect bridge-225109 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:19:18.715139  352848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 07:19:18.720646  352848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:19:18.732869  352848 kubeadm.go:884] updating cluster {Name:bridge-225109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-225109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:19:18.733077  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:18.884606  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:19.034759  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:19.207667  352848 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1210 07:19:19.207761  352848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:19:19.233336  352848 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1210 07:19:19.233357  352848 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:19:19.233409  352848 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:19.233615  352848 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:19:19.233700  352848 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:19:19.233789  352848 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:19:19.233891  352848 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:19:19.233981  352848 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:19:19.234072  352848 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:19.234164  352848 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:19:19.236080  352848 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:19.236392  352848 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:19.236182  352848 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:19:19.236226  352848 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:19:19.236270  352848 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:19:19.236306  352848 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:19:19.236338  352848 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:19:19.236345  352848 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:19:19.550473  352848 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1210 07:19:19.550571  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1210 07:19:19.551609  352848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.3" and sha "2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6"
	I1210 07:19:19.551759  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:19:19.555522  352848 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1210 07:19:19.555584  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:19.558681  352848 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
	I1210 07:19:19.558747  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:19:19.561451  352848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.3" and sha "7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22"
	I1210 07:19:19.561551  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:19:19.562403  352848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.3" and sha "4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162"
	I1210 07:19:19.562486  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:19:19.564303  352848 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.3" and sha "cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896"
	I1210 07:19:19.564393  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:19:19.591384  352848 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1210 07:19:19.591486  352848 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:19:19.591564  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:19.622920  352848 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6" in container runtime
	I1210 07:19:19.623001  352848 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:19:19.623082  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:19.638548  352848 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1210 07:19:19.638768  352848 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:19.638807  352848 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162" in container runtime
	I1210 07:19:19.638839  352848 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:19:19.638882  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:19.638883  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:19.638737  352848 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22" in container runtime
	I1210 07:19:19.638975  352848 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:19:19.638996  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:19.638999  352848 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896" in container runtime
	I1210 07:19:19.639067  352848 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:19:19.639090  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:19:19.639136  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:19.638667  352848 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1210 07:19:19.639221  352848 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:19:19.639150  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:19:19.639302  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:19.679422  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:19:19.679510  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:19:19.679511  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:19:19.679626  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:19:19.679690  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:19:19.679761  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:19:19.679762  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:19.783694  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:19:19.783786  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:19:19.783845  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:19:19.783896  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:19.783956  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:19:19.783986  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1210 07:19:19.784019  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:19:19.880020  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:19:19.880162  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:19:19.880221  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1210 07:19:19.880292  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:19:19.885980  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:19.886055  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3
	I1210 07:19:19.886225  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:19:19.886343  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:19:19.886416  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:19:19.943069  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:19:19.943103  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1210 07:19:19.943177  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3
	I1210 07:19:19.943252  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:19:19.943309  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1210 07:19:19.943365  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:19:19.943424  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1210 07:19:19.943469  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:19:19.978159  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3
	I1210 07:19:19.978356  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3
	I1210 07:19:19.978400  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:19:19.978431  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:19:19.978724  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (15787008 bytes)
	I1210 07:19:19.978492  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:19:19.978793  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1210 07:19:19.978518  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:19:19.978828  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (22806528 bytes)
	I1210 07:19:19.978549  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:19:19.978858  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1210 07:19:19.978612  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:19:19.979804  352848 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:19:19.979888  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1210 07:19:20.018770  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:19:20.018794  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:19:20.018883  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (24578048 bytes)
	I1210 07:19:20.018888  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (20730880 bytes)
	I1210 07:19:20.230257  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1210 07:19:20.475170  352848 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:19:20.475299  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.3
	W1210 07:19:20.554352  352848 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1210 07:19:20.554526  352848 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1210 07:19:20.554614  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:21.851362  352848 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.3: (1.376006603s)
	I1210 07:19:21.851441  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.3 from cache
	I1210 07:19:21.851446  352848 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.296803591s)
	I1210 07:19:21.851494  352848 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1210 07:19:21.851517  352848 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:19:21.851606  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:19:21.851520  352848 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:21.851757  352848 ssh_runner.go:195] Run: which crictl
	I1210 07:19:23.232730  352848 ssh_runner.go:235] Completed: which crictl: (1.380922575s)
	I1210 07:19:23.232799  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:23.232868  352848 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.381227661s)
	I1210 07:19:23.232882  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1210 07:19:23.232903  352848 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:19:23.232930  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:19:24.701680  352848 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.468724464s)
	I1210 07:19:24.701719  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1210 07:19:24.701737  352848 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:19:24.701796  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:19:24.701881  352848 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.469067917s)
	I1210 07:19:24.701919  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:25.759867  352848 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.3: (1.058048277s)
	I1210 07:19:25.759890  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.3 from cache
	I1210 07:19:25.759906  352848 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:19:25.759951  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:19:25.760050  352848 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.057879725s)
	I1210 07:19:25.760098  352848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:26.718633  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.3 from cache
	I1210 07:19:26.718668  352848 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:19:26.718718  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:19:26.718811  352848 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 07:19:26.718878  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:19:26.726947  352848 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:19:26.726981  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1210 07:19:27.844578  352848 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.3: (1.125837196s)
	I1210 07:19:27.844657  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.3 from cache
	I1210 07:19:27.844695  352848 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:19:27.844772  352848 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:19:28.261984  352848 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22094-2307/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 07:19:28.262020  352848 cache_images.go:125] Successfully loaded all cached images
	I1210 07:19:28.262026  352848 cache_images.go:94] duration metric: took 9.028653527s to LoadCachedImages
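
Note: each image load above follows the same three steps: stat the tarball under /var/lib/minikube/images, scp it from the host cache when the existence check exits with status 1, then import it with ctr -n=k8s.io images import. A condensed sketch, using local exec in place of the log's ssh_runner and illustrative paths:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the per-image pattern above: existence check, copy
// on miss, then import into containerd's k8s.io namespace. Plain local exec
// stands in for the log's ssh_runner, and both paths are illustrative.
func loadCachedImage(hostTar, nodeTar string) error {
	if err := exec.Command("stat", "-c", "%s %y", nodeTar).Run(); err != nil {
		// stat exited non-zero -> tarball missing on the node; transfer it.
		if err := exec.Command("cp", hostTar, nodeTar).Run(); err != nil {
			return fmt.Errorf("transfer %s: %w", hostTar, err)
		}
	}
	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", nodeTar).Run()
}

func main() {
	if err := loadCachedImage("/tmp/cache/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1"); err != nil {
		fmt.Println("load failed:", err)
	}
}
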
	I1210 07:19:28.262061  352848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 containerd true true} ...
	I1210 07:19:28.262181  352848 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-225109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:bridge-225109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1210 07:19:28.262264  352848 ssh_runner.go:195] Run: sudo crictl info
	I1210 07:19:28.288368  352848 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:19:28.288398  352848 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:19:28.288422  352848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-225109 NodeName:bridge-225109 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:19:28.288540  352848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-225109"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
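
Note: the kubeadm config above is rendered from the options struct logged at kubeadm.go:190. The sketch below re-renders just the networking block with text/template; the struct and template are illustrative stand-ins, not minikube's own:

package main

import (
	"os"
	"text/template"
)

// netCfg holds just the fields needed for the networking block; minikube's
// real rendering covers the whole ClusterConfiguration.
type netCfg struct{ PodSubnet, ServiceCIDR, DNSDomain string }

const networking = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("net").Parse(networking))
	_ = t.Execute(os.Stdout, netCfg{
		PodSubnet:   "10.244.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
		DNSDomain:   "cluster.local",
	})
}
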
	
	I1210 07:19:28.288615  352848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:19:28.296548  352848 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:19:28.296657  352848 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:19:28.304299  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
	I1210 07:19:28.304383  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:19:28.304468  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubelet.sha256
	I1210 07:19:28.304497  352848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:19:28.304572  352848 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
	I1210 07:19:28.304633  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:19:28.309582  352848 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:19:28.309617  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (58130616 bytes)
	I1210 07:19:28.325558  352848 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:19:28.325594  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (71434424 bytes)
	I1210 07:19:28.325717  352848 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:19:28.349337  352848 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:19:28.349379  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (56426788 bytes)
	I1210 07:19:29.136818  352848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:19:29.146034  352848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 07:19:29.164242  352848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:19:29.177684  352848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1210 07:19:29.190997  352848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:19:29.195270  352848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:19:29.211612  352848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:29.336536  352848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:19:29.354236  352848 certs.go:69] Setting up /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109 for IP: 192.168.76.2
	I1210 07:19:29.354258  352848 certs.go:195] generating shared ca certs ...
	I1210 07:19:29.354274  352848 certs.go:227] acquiring lock for ca certs: {Name:mk43e7e192d9b1a5f363ea3a89db453126eede8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:29.354427  352848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key
	I1210 07:19:29.354486  352848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key
	I1210 07:19:29.354499  352848 certs.go:257] generating profile certs ...
	I1210 07:19:29.354560  352848 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/client.key
	I1210 07:19:29.354575  352848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/client.crt with IP's: []
	I1210 07:19:29.553094  352848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/client.crt ...
	I1210 07:19:29.553131  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/client.crt: {Name:mk4048afe0e97b95477cc4bcd8e77238e98b16e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:29.553364  352848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/client.key ...
	I1210 07:19:29.553380  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/client.key: {Name:mk14d4cca9922a81114258b9e96aa6f4e66ac56c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:29.553478  352848 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.key.e57121b7
	I1210 07:19:29.553495  352848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.crt.e57121b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:19:29.803170  352848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.crt.e57121b7 ...
	I1210 07:19:29.803199  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.crt.e57121b7: {Name:mkc7adad12e2388dff60e10c4830ea036033dc7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:29.803368  352848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.key.e57121b7 ...
	I1210 07:19:29.803381  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.key.e57121b7: {Name:mk5f11fc688366ea62579ec8ecd34d515e0b9806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:29.803460  352848 certs.go:382] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.crt.e57121b7 -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.crt
	I1210 07:19:29.803542  352848 certs.go:386] copying /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.key.e57121b7 -> /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.key
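Note: the apiserver certificate generated above is signed for the service VIP (10.96.0.1), loopback, and the node IP. Verifying which SANs ended up in such a cert is a one-liner (an illustrative check, not something the harness runs; the path is this profile's):

    # List the subjectAltName extension of the freshly minted apiserver cert (OpenSSL >= 1.1.1).
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.crt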
	I1210 07:19:29.803610  352848 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.key
	I1210 07:19:29.803626  352848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.crt with IP's: []
	I1210 07:19:30.142528  352848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.crt ...
	I1210 07:19:30.142562  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.crt: {Name:mk9161c5dc4f8e20ac55b2fdd89ae74fa2658481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:30.142758  352848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.key ...
	I1210 07:19:30.142770  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.key: {Name:mk94e20e4d92336407b85cc7a52692f3029ead68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:30.142970  352848 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem (1338 bytes)
	W1210 07:19:30.143047  352848 certs.go:480] ignoring /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1210 07:19:30.143063  352848 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:19:30.143102  352848 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/ca.pem (1078 bytes)
	I1210 07:19:30.143136  352848 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:19:30.143164  352848 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/certs/key.pem (1675 bytes)
	I1210 07:19:30.143219  352848 certs.go:484] found cert: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1210 07:19:30.143926  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:19:30.174796  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:19:30.194817  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:19:30.214595  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 07:19:30.238493  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:19:30.258748  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:19:30.278857  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:19:30.297040  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/bridge-225109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:19:30.315193  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1210 07:19:30.333646  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1210 07:19:30.351842  352848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:19:30.369545  352848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:19:30.382873  352848 ssh_runner.go:195] Run: openssl version
	I1210 07:19:30.389441  352848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:30.397060  352848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:19:30.404983  352848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:30.409312  352848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:29 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:30.409398  352848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:30.451415  352848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:19:30.459276  352848 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:19:30.467056  352848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4116.pem
	I1210 07:19:30.474860  352848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem
	I1210 07:19:30.482710  352848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1210 07:19:30.486798  352848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:39 /usr/share/ca-certificates/4116.pem
	I1210 07:19:30.486863  352848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1210 07:19:30.528398  352848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:19:30.536288  352848 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0
	I1210 07:19:30.544013  352848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41162.pem
	I1210 07:19:30.551515  352848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem
	I1210 07:19:30.559512  352848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1210 07:19:30.563889  352848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:39 /usr/share/ca-certificates/41162.pem
	I1210 07:19:30.564001  352848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1210 07:19:30.605084  352848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:19:30.612995  352848 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0
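Note: the openssl/ln pairs above follow the standard OpenSSL trust-store convention: each CA certificate is resolved through a symlink named after its subject hash. Done by hand, the minikubeCA step looks like this (cert path and the resulting hash b5213941 come from this run):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, b5213941 here
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL looks CAs up as <subject-hash>.0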
	I1210 07:19:30.620686  352848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:19:30.624724  352848 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:19:30.624773  352848 kubeadm.go:401] StartCluster: {Name:bridge-225109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-225109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:19:30.624855  352848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1210 07:19:30.624911  352848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:19:30.650977  352848 cri.go:89] found id: ""
	I1210 07:19:30.651073  352848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:19:30.662790  352848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:19:30.671236  352848 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:19:30.671298  352848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:19:30.681887  352848 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:19:30.681907  352848 kubeadm.go:158] found existing configuration files:
	
	I1210 07:19:30.681958  352848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:19:30.690399  352848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:19:30.690471  352848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:19:30.698115  352848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:19:30.706422  352848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:19:30.706483  352848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:19:30.714068  352848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:19:30.722569  352848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:19:30.722640  352848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:19:30.730243  352848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:19:30.738249  352848 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:19:30.738310  352848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:19:30.745655  352848 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:19:30.791187  352848 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:19:30.791296  352848 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:19:30.812888  352848 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:19:30.812991  352848 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:19:30.813048  352848 kubeadm.go:319] OS: Linux
	I1210 07:19:30.813112  352848 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:19:30.813181  352848 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:19:30.813247  352848 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:19:30.813320  352848 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:19:30.813391  352848 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:19:30.813469  352848 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:19:30.813535  352848 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:19:30.813603  352848 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:19:30.813698  352848 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:19:30.888286  352848 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:19:30.888439  352848 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:19:30.888582  352848 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:19:30.893453  352848 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:19:30.904480  352848 out.go:252]   - Generating certificates and keys ...
	I1210 07:19:30.904581  352848 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:19:30.904656  352848 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:19:31.327182  352848 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:19:31.813205  352848 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:19:32.266587  352848 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:19:32.781994  352848 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:19:33.565297  352848 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:19:33.565654  352848 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-225109 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:19:34.738114  352848 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:19:34.738269  352848 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-225109 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:19:35.194017  352848 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:19:36.093746  352848 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:19:36.404593  352848 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:19:36.404896  352848 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:19:37.295742  352848 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:19:38.223848  352848 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:19:38.639942  352848 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:19:40.479547  352848 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:19:41.055231  352848 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:19:41.055329  352848 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:19:41.057628  352848 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:19:41.062056  352848 out.go:252]   - Booting up control plane ...
	I1210 07:19:41.062169  352848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:19:41.062255  352848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:19:41.062743  352848 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:19:41.079277  352848 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:19:41.079388  352848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:19:41.087091  352848 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:19:41.087192  352848 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:19:41.087231  352848 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:19:41.234120  352848 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:19:41.234242  352848 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:19:42.734940  352848 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50140895s
	I1210 07:19:42.738514  352848 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:19:42.738608  352848 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1210 07:19:42.738715  352848 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:19:42.738793  352848 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:19:47.742083  352848 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.003201988s
	I1210 07:19:48.342335  352848 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.603796154s
	I1210 07:19:50.241869  352848 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502824989s
	I1210 07:19:50.298128  352848 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:19:50.325471  352848 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:19:50.347338  352848 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:19:50.347554  352848 kubeadm.go:319] [mark-control-plane] Marking the node bridge-225109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:19:50.364358  352848 kubeadm.go:319] [bootstrap-token] Using token: cs2f35.gkp04xv95dik6tz9
	I1210 07:19:50.367242  352848 out.go:252]   - Configuring RBAC rules ...
	I1210 07:19:50.367390  352848 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:19:50.376385  352848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:19:50.391311  352848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:19:50.398983  352848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:19:50.406507  352848 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:19:50.411258  352848 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:19:50.652965  352848 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:19:51.088336  352848 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:19:51.651675  352848 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:19:51.652961  352848 kubeadm.go:319] 
	I1210 07:19:51.653038  352848 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:19:51.653051  352848 kubeadm.go:319] 
	I1210 07:19:51.653137  352848 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:19:51.653145  352848 kubeadm.go:319] 
	I1210 07:19:51.653170  352848 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:19:51.653232  352848 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:19:51.653286  352848 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:19:51.653312  352848 kubeadm.go:319] 
	I1210 07:19:51.653369  352848 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:19:51.653377  352848 kubeadm.go:319] 
	I1210 07:19:51.653429  352848 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:19:51.653439  352848 kubeadm.go:319] 
	I1210 07:19:51.653490  352848 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:19:51.653568  352848 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:19:51.653639  352848 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:19:51.653647  352848 kubeadm.go:319] 
	I1210 07:19:51.653730  352848 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:19:51.653809  352848 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:19:51.653817  352848 kubeadm.go:319] 
	I1210 07:19:51.653901  352848 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cs2f35.gkp04xv95dik6tz9 \
	I1210 07:19:51.654006  352848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad1b8c755e864c85ee916ed7811250889b7406027f35932b933bcb6208ab04c \
	I1210 07:19:51.654034  352848 kubeadm.go:319] 	--control-plane 
	I1210 07:19:51.654042  352848 kubeadm.go:319] 
	I1210 07:19:51.654127  352848 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:19:51.654134  352848 kubeadm.go:319] 
	I1210 07:19:51.654217  352848 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cs2f35.gkp04xv95dik6tz9 \
	I1210 07:19:51.654326  352848 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad1b8c755e864c85ee916ed7811250889b7406027f35932b933bcb6208ab04c 
	I1210 07:19:51.663230  352848 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:19:51.663459  352848 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:19:51.663564  352848 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
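Note: if the bootstrap token above expires, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA using the recipe from the kubeadm documentation (a sketch; note this cluster keeps its certs under /var/lib/minikube/certs, per the certificateDir line above, rather than /etc/kubernetes/pki):

    # Hash the CA public key; should reproduce the sha256:6ad1b8c7... value printed above.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'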
	I1210 07:19:51.663580  352848 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:19:51.666568  352848 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 07:19:51.669449  352848 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 07:19:51.681275  352848 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
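Note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log; a bridge conflist of roughly the shape minikube installs looks like the following (field values are illustrative, not the exact bytes from this run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF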
	I1210 07:19:51.705843  352848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:19:51.705953  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:51.706012  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-225109 minikube.k8s.io/updated_at=2025_12_10T07_19_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=bridge-225109 minikube.k8s.io/primary=true
	I1210 07:19:51.888355  352848 ops.go:34] apiserver oom_adj: -16
	I1210 07:19:51.888495  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:52.389422  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:52.888713  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:53.388867  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:53.889085  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:54.389152  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:54.889353  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:55.388702  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:55.888621  352848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:19:55.981543  352848 kubeadm.go:1114] duration metric: took 4.275633615s to wait for elevateKubeSystemPrivileges
	I1210 07:19:55.981577  352848 kubeadm.go:403] duration metric: took 25.356808265s to StartCluster
	I1210 07:19:55.981594  352848 settings.go:142] acquiring lock: {Name:mkafea375943a88d44835a845e0d62b9b3a69986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:55.981658  352848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 07:19:55.984165  352848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/kubeconfig: {Name:mk6422e43ae26049091b4d446552471a2f8b7957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:55.984452  352848 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1210 07:19:55.984553  352848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:19:55.985004  352848 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:19:55.985175  352848 addons.go:70] Setting storage-provisioner=true in profile "bridge-225109"
	I1210 07:19:55.985190  352848 addons.go:239] Setting addon storage-provisioner=true in "bridge-225109"
	I1210 07:19:55.985302  352848 addons.go:70] Setting default-storageclass=true in profile "bridge-225109"
	I1210 07:19:55.985319  352848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-225109"
	I1210 07:19:55.985749  352848 cli_runner.go:164] Run: docker container inspect bridge-225109 --format={{.State.Status}}
	I1210 07:19:55.985972  352848 host.go:66] Checking if "bridge-225109" exists ...
	I1210 07:19:55.986466  352848 cli_runner.go:164] Run: docker container inspect bridge-225109 --format={{.State.Status}}
	I1210 07:19:55.987742  352848 config.go:182] Loaded profile config "bridge-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 07:19:55.990264  352848 out.go:179] * Verifying Kubernetes components...
	I1210 07:19:55.994546  352848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:56.036214  352848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:56.041027  352848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:19:56.041053  352848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:19:56.041135  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:56.043655  352848 addons.go:239] Setting addon default-storageclass=true in "bridge-225109"
	I1210 07:19:56.043697  352848 host.go:66] Checking if "bridge-225109" exists ...
	I1210 07:19:56.044151  352848 cli_runner.go:164] Run: docker container inspect bridge-225109 --format={{.State.Status}}
	I1210 07:19:56.073302  352848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa Username:docker}
	I1210 07:19:56.084126  352848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:19:56.084160  352848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:19:56.084223  352848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-225109
	I1210 07:19:56.119472  352848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/bridge-225109/id_rsa Username:docker}
	I1210 07:19:56.363988  352848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:19:56.436517  352848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:19:56.520652  352848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:19:56.578985  352848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:19:57.241406  352848 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1210 07:19:57.242324  352848 node_ready.go:35] waiting up to 15m0s for node "bridge-225109" to be "Ready" ...
	I1210 07:19:57.272239  352848 node_ready.go:49] node "bridge-225109" is "Ready"
	I1210 07:19:57.272270  352848 node_ready.go:38] duration metric: took 28.27809ms for node "bridge-225109" to be "Ready" ...
	I1210 07:19:57.272283  352848 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:19:57.272339  352848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:19:57.530231  352848 api_server.go:72] duration metric: took 1.545741879s to wait for apiserver process to appear ...
	I1210 07:19:57.530252  352848 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:19:57.530271  352848 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 07:19:57.531095  352848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.010348986s)
	I1210 07:19:57.553853  352848 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 07:19:57.554987  352848 api_server.go:141] control plane version: v1.34.3
	I1210 07:19:57.555064  352848 api_server.go:131] duration metric: took 24.803516ms to wait for apiserver health ...
	I1210 07:19:57.555089  352848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:19:57.567145  352848 system_pods.go:59] 8 kube-system pods found
	I1210 07:19:57.567220  352848 system_pods.go:61] "coredns-66bc5c9577-qtwg9" [97e91f3e-5db0-40e8-b21f-05b0f9f08088] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:57.567243  352848 system_pods.go:61] "coredns-66bc5c9577-sjlrt" [97ac88e3-a29f-4771-bac1-1fc752bec729] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:57.567268  352848 system_pods.go:61] "etcd-bridge-225109" [69cbbf74-24f4-43c1-b16e-e6df7b2c9f2c] Running
	I1210 07:19:57.567312  352848 system_pods.go:61] "kube-apiserver-bridge-225109" [5ad948cc-ed56-475a-9c08-9d0c6373f7fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:19:57.567334  352848 system_pods.go:61] "kube-controller-manager-bridge-225109" [b161f344-96a7-4de0-9423-00e94929e701] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:19:57.567355  352848 system_pods.go:61] "kube-proxy-2bjls" [4335bd5d-9122-4b33-a155-4dabe7f75e49] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:19:57.567387  352848 system_pods.go:61] "kube-scheduler-bridge-225109" [b88a365e-975b-4513-aff9-5cd661b43187] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:19:57.567411  352848 system_pods.go:61] "storage-provisioner" [355f126c-646e-4ae2-9d15-bf956c7324ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:19:57.567433  352848 system_pods.go:74] duration metric: took 12.325458ms to wait for pod list to return data ...
	I1210 07:19:57.567455  352848 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:19:57.573032  352848 default_sa.go:45] found service account: "default"
	I1210 07:19:57.573097  352848 default_sa.go:55] duration metric: took 5.621416ms for default service account to be created ...
	I1210 07:19:57.573130  352848 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:19:57.575994  352848 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 07:19:57.577288  352848 system_pods.go:86] 8 kube-system pods found
	I1210 07:19:57.577318  352848 system_pods.go:89] "coredns-66bc5c9577-qtwg9" [97e91f3e-5db0-40e8-b21f-05b0f9f08088] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:57.577326  352848 system_pods.go:89] "coredns-66bc5c9577-sjlrt" [97ac88e3-a29f-4771-bac1-1fc752bec729] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:57.577332  352848 system_pods.go:89] "etcd-bridge-225109" [69cbbf74-24f4-43c1-b16e-e6df7b2c9f2c] Running
	I1210 07:19:57.577339  352848 system_pods.go:89] "kube-apiserver-bridge-225109" [5ad948cc-ed56-475a-9c08-9d0c6373f7fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:19:57.577346  352848 system_pods.go:89] "kube-controller-manager-bridge-225109" [b161f344-96a7-4de0-9423-00e94929e701] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:19:57.577353  352848 system_pods.go:89] "kube-proxy-2bjls" [4335bd5d-9122-4b33-a155-4dabe7f75e49] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:19:57.577360  352848 system_pods.go:89] "kube-scheduler-bridge-225109" [b88a365e-975b-4513-aff9-5cd661b43187] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:19:57.577366  352848 system_pods.go:89] "storage-provisioner" [355f126c-646e-4ae2-9d15-bf956c7324ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:19:57.577386  352848 retry.go:31] will retry after 193.01182ms: missing components: kube-dns, kube-proxy
	I1210 07:19:57.579414  352848 addons.go:530] duration metric: took 1.59440048s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 07:19:57.746154  352848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-225109" context rescaled to 1 replicas
	I1210 07:19:57.775686  352848 system_pods.go:86] 8 kube-system pods found
	I1210 07:19:57.775724  352848 system_pods.go:89] "coredns-66bc5c9577-qtwg9" [97e91f3e-5db0-40e8-b21f-05b0f9f08088] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:57.775733  352848 system_pods.go:89] "coredns-66bc5c9577-sjlrt" [97ac88e3-a29f-4771-bac1-1fc752bec729] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:57.775739  352848 system_pods.go:89] "etcd-bridge-225109" [69cbbf74-24f4-43c1-b16e-e6df7b2c9f2c] Running
	I1210 07:19:57.775775  352848 system_pods.go:89] "kube-apiserver-bridge-225109" [5ad948cc-ed56-475a-9c08-9d0c6373f7fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:19:57.775783  352848 system_pods.go:89] "kube-controller-manager-bridge-225109" [b161f344-96a7-4de0-9423-00e94929e701] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:19:57.775794  352848 system_pods.go:89] "kube-proxy-2bjls" [4335bd5d-9122-4b33-a155-4dabe7f75e49] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:19:57.775800  352848 system_pods.go:89] "kube-scheduler-bridge-225109" [b88a365e-975b-4513-aff9-5cd661b43187] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:19:57.775806  352848 system_pods.go:89] "storage-provisioner" [355f126c-646e-4ae2-9d15-bf956c7324ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:19:57.775839  352848 retry.go:31] will retry after 358.424195ms: missing components: kube-dns, kube-proxy
	I1210 07:19:58.140496  352848 system_pods.go:86] 8 kube-system pods found
	I1210 07:19:58.140535  352848 system_pods.go:89] "coredns-66bc5c9577-qtwg9" [97e91f3e-5db0-40e8-b21f-05b0f9f08088] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:58.140545  352848 system_pods.go:89] "coredns-66bc5c9577-sjlrt" [97ac88e3-a29f-4771-bac1-1fc752bec729] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:58.140550  352848 system_pods.go:89] "etcd-bridge-225109" [69cbbf74-24f4-43c1-b16e-e6df7b2c9f2c] Running
	I1210 07:19:58.140570  352848 system_pods.go:89] "kube-apiserver-bridge-225109" [5ad948cc-ed56-475a-9c08-9d0c6373f7fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:19:58.140577  352848 system_pods.go:89] "kube-controller-manager-bridge-225109" [b161f344-96a7-4de0-9423-00e94929e701] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:19:58.140585  352848 system_pods.go:89] "kube-proxy-2bjls" [4335bd5d-9122-4b33-a155-4dabe7f75e49] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:19:58.140589  352848 system_pods.go:89] "kube-scheduler-bridge-225109" [b88a365e-975b-4513-aff9-5cd661b43187] Running
	I1210 07:19:58.140595  352848 system_pods.go:89] "storage-provisioner" [355f126c-646e-4ae2-9d15-bf956c7324ef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:19:58.140610  352848 retry.go:31] will retry after 471.15445ms: missing components: kube-dns, kube-proxy
	I1210 07:19:58.615480  352848 system_pods.go:86] 8 kube-system pods found
	I1210 07:19:58.615516  352848 system_pods.go:89] "coredns-66bc5c9577-qtwg9" [97e91f3e-5db0-40e8-b21f-05b0f9f08088] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:58.615526  352848 system_pods.go:89] "coredns-66bc5c9577-sjlrt" [97ac88e3-a29f-4771-bac1-1fc752bec729] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:19:58.615532  352848 system_pods.go:89] "etcd-bridge-225109" [69cbbf74-24f4-43c1-b16e-e6df7b2c9f2c] Running
	I1210 07:19:58.615539  352848 system_pods.go:89] "kube-apiserver-bridge-225109" [5ad948cc-ed56-475a-9c08-9d0c6373f7fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 07:19:58.615548  352848 system_pods.go:89] "kube-controller-manager-bridge-225109" [b161f344-96a7-4de0-9423-00e94929e701] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 07:19:58.615552  352848 system_pods.go:89] "kube-proxy-2bjls" [4335bd5d-9122-4b33-a155-4dabe7f75e49] Running
	I1210 07:19:58.615558  352848 system_pods.go:89] "kube-scheduler-bridge-225109" [b88a365e-975b-4513-aff9-5cd661b43187] Running
	I1210 07:19:58.615562  352848 system_pods.go:89] "storage-provisioner" [355f126c-646e-4ae2-9d15-bf956c7324ef] Running
	I1210 07:19:58.615571  352848 system_pods.go:126] duration metric: took 1.042422006s to wait for k8s-apps to be running ...
	I1210 07:19:58.615580  352848 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:19:58.615640  352848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:19:58.629805  352848 system_svc.go:56] duration metric: took 14.216847ms WaitForService to wait for kubelet
	I1210 07:19:58.629833  352848 kubeadm.go:587] duration metric: took 2.645348387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:19:58.629852  352848 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:19:58.632601  352848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 07:19:58.632635  352848 node_conditions.go:123] node cpu capacity is 2
	I1210 07:19:58.632650  352848 node_conditions.go:105] duration metric: took 2.792722ms to run NodePressure ...
	I1210 07:19:58.632688  352848 start.go:242] waiting for startup goroutines ...
	I1210 07:19:58.632696  352848 start.go:247] waiting for cluster config update ...
	I1210 07:19:58.632710  352848 start.go:256] writing updated cluster config ...
	I1210 07:19:58.633001  352848 ssh_runner.go:195] Run: rm -f paused
	I1210 07:19:58.636945  352848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:19:58.640498  352848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qtwg9" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:20:00.646713  352848 pod_ready.go:104] pod "coredns-66bc5c9577-qtwg9" is not "Ready", error: <nil>
	W1210 07:20:03.146242  352848 pod_ready.go:104] pod "coredns-66bc5c9577-qtwg9" is not "Ready", error: <nil>
	W1210 07:20:05.646518  352848 pod_ready.go:104] pod "coredns-66bc5c9577-qtwg9" is not "Ready", error: <nil>
	W1210 07:20:07.647215  352848 pod_ready.go:104] pod "coredns-66bc5c9577-qtwg9" is not "Ready", error: <nil>
	I1210 07:20:08.643711  352848 pod_ready.go:99] pod "coredns-66bc5c9577-qtwg9" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-qtwg9" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-qtwg9" not found
	I1210 07:20:08.643739  352848 pod_ready.go:86] duration metric: took 10.003213045s for pod "coredns-66bc5c9577-qtwg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:20:08.643749  352848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sjlrt" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:20:10.651778  352848 pod_ready.go:104] pod "coredns-66bc5c9577-sjlrt" is not "Ready", error: <nil>
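Note: the pod_ready polling above has a hand-run equivalent using kubectl's readiness gate, e.g. for the CoreDNS pods being tracked here (illustrative; not the command the harness executes):

    # Block until every kube-dns-labelled pod reports Ready, mirroring the 4m0s budget above.
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m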
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777113414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777127404Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777160594Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777174535Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777184742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777195950Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777205197Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777215487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777231528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777260304Z" level=info msg="Connect containerd service"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.777515527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.778069290Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789502105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789748787Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.789677541Z" level=info msg="Start subscribing containerd event"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.795087082Z" level=info msg="Start recovering state"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.809745847Z" level=info msg="Start event monitor"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.809929530Z" level=info msg="Start cni network conf syncer for default"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810001120Z" level=info msg="Start streaming server"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810060181Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810114328Z" level=info msg="runtime interface starting up..."
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810165307Z" level=info msg="starting plugins..."
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.810240475Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:00:37 no-preload-320236 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 10 07:00:37 no-preload-320236 containerd[556]: time="2025-12-10T07:00:37.811841962Z" level=info msg="containerd successfully booted in 0.055335s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:20:11.671864   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:20:11.672733   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:20:11.674483   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:20:11.675311   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:20:11.676976   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
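	
	The repeated "connection refused" on localhost:8443 only says that nothing is listening on the apiserver port; the kubelet section below shows why (the kubelet never survives config validation, so the static apiserver pod is never created). A hedged spot-check, assuming the ss tool from iproute2 is present in the node image:
	
	# Sketch: in this state there should be no listener on 8443.
	out/minikube-linux-arm64 ssh -p no-preload-320236 -- "sudo ss -ltn | grep 8443 || echo no listener on 8443"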
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:20:11 up  2:02,  0 user,  load average: 2.41, 1.82, 1.47
	Linux no-preload-320236 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:20:08 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:08 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1559.
	Dec 10 07:20:08 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:08 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:08 no-preload-320236 kubelet[10115]: E1210 07:20:08.959450   10115 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:08 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:08 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:09 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1560.
	Dec 10 07:20:09 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:09 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:09 no-preload-320236 kubelet[10120]: E1210 07:20:09.704805   10120 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:09 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:09 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:10 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1561.
	Dec 10 07:20:10 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:10 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:10 no-preload-320236 kubelet[10125]: E1210 07:20:10.484534   10125 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:10 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:10 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:11 no-preload-320236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1562.
	Dec 10 07:20:11 no-preload-320236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:11 no-preload-320236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:11 no-preload-320236 kubelet[10160]: E1210 07:20:11.264807   10160 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:11 no-preload-320236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:11 no-preload-320236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
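	
	Every restart above (counters 1559 through 1562) dies at the same validation: this kubelet refuses to run on a host that is still on cgroup v1. A sketch for checking which cgroup version the host kernel exposes (not part of the suite):
	
	# Filesystem type of /sys/fs/cgroup identifies the cgroup setup:
	# cgroup2fs = unified cgroup v2, tmpfs = legacy cgroup v1 (this runner).
	stat -fc %T /sys/fs/cgroup/
	
	On a v1 host, adding systemd.unified_cgroup_hierarchy=1 to the kernel command line and rebooting is the usual way to switch a runner to v2.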
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-320236 -n no-preload-320236: exit status 2 (345.925729ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-320236" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.93s)
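helpers_test.go gates its kubectl follow-ups on the Go template output of minikube status, as logged above. The same probe by hand, command copied from the log (exit status 2 with "Stopped" is exactly the state the helper saw):

	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-320236 -n no-preload-320236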
E1210 07:22:21.383149    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:22:23.163381    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (347/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.63
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.33
9 TestDownloadOnly/v1.28.0/DeleteAll 0.31
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.2
12 TestDownloadOnly/v1.34.3/json-events 4.04
14 TestDownloadOnly/v1.34.3/cached-images 0.45
15 TestDownloadOnly/v1.34.3/binaries 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.21
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-rc.1/json-events 2.92
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0.48
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 1.11
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 145.4
38 TestAddons/serial/Volcano 40.72
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 8.92
44 TestAddons/parallel/Registry 15.05
45 TestAddons/parallel/RegistryCreds 0.75
46 TestAddons/parallel/Ingress 17.84
47 TestAddons/parallel/InspektorGadget 11.84
48 TestAddons/parallel/MetricsServer 6.84
50 TestAddons/parallel/CSI 52.38
51 TestAddons/parallel/Headlamp 15.98
52 TestAddons/parallel/CloudSpanner 5.65
53 TestAddons/parallel/LocalPath 52.75
54 TestAddons/parallel/NvidiaDevicePlugin 5.66
55 TestAddons/parallel/Yakd 11.91
57 TestAddons/StoppedEnableDisable 12.37
58 TestCertOptions 41.65
59 TestCertExpiration 229
61 TestForceSystemdFlag 44.31
62 TestForceSystemdEnv 41.28
63 TestDockerEnvContainerd 53.87
67 TestErrorSpam/setup 36.85
68 TestErrorSpam/start 0.78
69 TestErrorSpam/status 1.15
70 TestErrorSpam/pause 1.73
71 TestErrorSpam/unpause 1.79
72 TestErrorSpam/stop 1.59
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 59.18
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.9
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
84 TestFunctional/serial/CacheCmd/cache/add_local 1.35
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 44.56
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.49
95 TestFunctional/serial/LogsFileCmd 1.48
96 TestFunctional/serial/InvalidService 4.25
98 TestFunctional/parallel/ConfigCmd 0.44
99 TestFunctional/parallel/DashboardCmd 7.8
100 TestFunctional/parallel/DryRun 0.43
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.18
106 TestFunctional/parallel/ServiceCmdConnect 7.59
107 TestFunctional/parallel/AddonsCmd 0.13
108 TestFunctional/parallel/PersistentVolumeClaim 19.87
110 TestFunctional/parallel/SSHCmd 0.73
111 TestFunctional/parallel/CpCmd 2.48
113 TestFunctional/parallel/FileSync 0.4
114 TestFunctional/parallel/CertSync 2.27
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
122 TestFunctional/parallel/License 0.29
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
136 TestFunctional/parallel/ProfileCmd/profile_list 0.49
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
138 TestFunctional/parallel/MountCmd/any-port 8.28
139 TestFunctional/parallel/ServiceCmd/List 0.63
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
142 TestFunctional/parallel/ServiceCmd/Format 0.39
143 TestFunctional/parallel/ServiceCmd/URL 0.4
144 TestFunctional/parallel/MountCmd/specific-port 1.49
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.7
146 TestFunctional/parallel/Version/short 0.08
147 TestFunctional/parallel/Version/components 1.32
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.03
153 TestFunctional/parallel/ImageCommands/Setup 0.68
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
157 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.21
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.35
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.09
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.84
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.93
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.47
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.47
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.25
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.13
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.72
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.16
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.67
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.58
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.25
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.38
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.8
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.88
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.24
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.25
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.77
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.13
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.37
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.46
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.67
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.39
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.16
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.17
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 161.12
265 TestMultiControlPlane/serial/DeployApp 7.81
266 TestMultiControlPlane/serial/PingHostFromPods 1.58
267 TestMultiControlPlane/serial/AddWorkerNode 32.77
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.12
270 TestMultiControlPlane/serial/CopyFile 20.22
271 TestMultiControlPlane/serial/StopSecondaryNode 12.9
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
273 TestMultiControlPlane/serial/RestartSecondaryNode 15.21
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.32
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.43
276 TestMultiControlPlane/serial/DeleteSecondaryNode 10.7
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
278 TestMultiControlPlane/serial/StopCluster 36.71
279 TestMultiControlPlane/serial/RestartCluster 62.59
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
281 TestMultiControlPlane/serial/AddSecondaryNode 61.92
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
287 TestJSONOutput/start/Command 58.56
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.72
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.65
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.01
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 40.37
313 TestKicCustomNetwork/use_default_bridge_network 37.79
314 TestKicExistingNetwork 43.55
315 TestKicCustomSubnet 40.04
316 TestKicStaticIP 41.86
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 88.19
321 TestMountStart/serial/StartWithMountFirst 8.42
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.46
324 TestMountStart/serial/VerifyMountSecond 0.31
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.45
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 86.61
333 TestMultiNode/serial/DeployApp2Nodes 5.55
334 TestMultiNode/serial/PingHostFrom2Pods 0.99
335 TestMultiNode/serial/AddNode 29.25
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.52
339 TestMultiNode/serial/StopNode 2.4
340 TestMultiNode/serial/StartAfterStop 8.11
341 TestMultiNode/serial/RestartKeepsNodes 72.65
342 TestMultiNode/serial/DeleteNode 5.55
343 TestMultiNode/serial/StopMultiNode 24.09
344 TestMultiNode/serial/RestartMultiNode 50.6
345 TestMultiNode/serial/ValidateNameConflict 42.17
350 TestPreload 121.8
352 TestScheduledStopUnix 114.02
355 TestInsufficientStorage 8.96
356 TestRunningBinaryUpgrade 62.35
359 TestMissingContainerUpgrade 130.44
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 53.24
363 TestNoKubernetes/serial/StartWithStopK8s 23.91
364 TestNoKubernetes/serial/Start 7.51
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
367 TestNoKubernetes/serial/ProfileList 0.91
368 TestNoKubernetes/serial/Stop 2.31
369 TestNoKubernetes/serial/StartNoArgs 7.3
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
371 TestStoppedBinaryUpgrade/Setup 1.36
372 TestStoppedBinaryUpgrade/Upgrade 58.41
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.65
382 TestPause/serial/Start 59.59
383 TestPause/serial/SecondStartNoReconfiguration 7.54
384 TestPause/serial/Pause 0.71
385 TestPause/serial/VerifyStatus 0.33
386 TestPause/serial/Unpause 0.65
387 TestPause/serial/PauseAgain 0.88
388 TestPause/serial/DeletePaused 3.06
389 TestPause/serial/VerifyDeletedResources 0.45
397 TestNetworkPlugins/group/false 3.77
402 TestStartStop/group/old-k8s-version/serial/FirstStart 61.46
403 TestStartStop/group/old-k8s-version/serial/DeployApp 8.38
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
405 TestStartStop/group/old-k8s-version/serial/Stop 12.09
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
407 TestStartStop/group/old-k8s-version/serial/SecondStart 50.44
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
411 TestStartStop/group/old-k8s-version/serial/Pause 3.03
415 TestStartStop/group/embed-certs/serial/FirstStart 57.23
416 TestStartStop/group/embed-certs/serial/DeployApp 8.35
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.6
418 TestStartStop/group/embed-certs/serial/Stop 12.09
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
420 TestStartStop/group/embed-certs/serial/SecondStart 49.32
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.72
424 TestStartStop/group/embed-certs/serial/Pause 2.97
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.14
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.91
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.71
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
440 TestStartStop/group/no-preload/serial/Stop 1.33
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.31
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.72
453 TestNetworkPlugins/group/auto/Start 59.37
454 TestNetworkPlugins/group/auto/KubeletFlags 0.3
455 TestNetworkPlugins/group/auto/NetCatPod 10.26
456 TestNetworkPlugins/group/auto/DNS 0.18
457 TestNetworkPlugins/group/auto/Localhost 0.14
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/kindnet/Start 59.94
460 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
462 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
463 TestNetworkPlugins/group/kindnet/DNS 0.17
464 TestNetworkPlugins/group/kindnet/Localhost 0.14
465 TestNetworkPlugins/group/kindnet/HairPin 0.15
466 TestNetworkPlugins/group/flannel/Start 62.96
468 TestNetworkPlugins/group/flannel/ControllerPod 6.01
469 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
470 TestNetworkPlugins/group/flannel/NetCatPod 10.25
471 TestNetworkPlugins/group/flannel/DNS 0.18
472 TestNetworkPlugins/group/flannel/Localhost 0.16
473 TestNetworkPlugins/group/flannel/HairPin 0.15
474 TestNetworkPlugins/group/enable-default-cni/Start 79.97
475 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
476 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
477 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
478 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
479 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
480 TestNetworkPlugins/group/bridge/Start 87.31
481 TestNetworkPlugins/group/calico/Start 81.58
482 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
483 TestNetworkPlugins/group/bridge/NetCatPod 11.47
484 TestNetworkPlugins/group/bridge/DNS 0.25
485 TestNetworkPlugins/group/bridge/Localhost 0.2
486 TestNetworkPlugins/group/bridge/HairPin 0.29
487 TestNetworkPlugins/group/custom-flannel/Start 69.45
488 TestNetworkPlugins/group/calico/ControllerPod 6.01
489 TestNetworkPlugins/group/calico/KubeletFlags 0.42
490 TestNetworkPlugins/group/calico/NetCatPod 11.39
491 TestNetworkPlugins/group/calico/DNS 0.31
492 TestNetworkPlugins/group/calico/Localhost 0.28
493 TestNetworkPlugins/group/calico/HairPin 0.21
494 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
495 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
496 TestNetworkPlugins/group/custom-flannel/DNS 0.17
497 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
498 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (10.63s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-343562 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-343562 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.6311277s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.63s)
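The json-events subtests run minikube with -o=json, which emits one JSON (CloudEvents-style) object per line on stdout, while --alsologtostderr keeps the klog output on stderr. A sketch for inspecting those events from the same command, assuming jq is available on the runner (the test itself does not use it):

	# Replay the logged command and list just the emitted event types.
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-343562 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker --container-runtime=containerd | jq -r .type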

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 05:29:06.530410    4116 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1210 05:29:06.530492    4116 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
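preload-exists only stats the tarball the previous subtest downloaded. The same check by hand, path taken verbatim from the log line above:

	ls -lh /home/jenkins/minikube-integration/22094-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4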

TestDownloadOnly/v1.28.0/LogsDuration (0.33s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-343562
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-343562: exit status 85 (333.531446ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-343562 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-343562 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:55.966972    4121 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:28:55.967160    4121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:55.967171    4121 out.go:374] Setting ErrFile to fd 2...
	I1210 05:28:55.967177    4121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:55.967480    4121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	W1210 05:28:55.967627    4121 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22094-2307/.minikube/config/config.json: open /home/jenkins/minikube-integration/22094-2307/.minikube/config/config.json: no such file or directory
	I1210 05:28:55.968071    4121 out.go:368] Setting JSON to true
	I1210 05:28:55.968866    4121 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":686,"bootTime":1765343850,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:28:55.968937    4121 start.go:143] virtualization:  
	I1210 05:28:55.974914    4121 out.go:99] [download-only-343562] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1210 05:28:55.975105    4121 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22094-2307/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 05:28:55.975220    4121 notify.go:221] Checking for updates...
	I1210 05:28:55.979696    4121 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:55.983592    4121 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:28:55.986947    4121 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:28:55.990282    4121 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:28:55.993967    4121 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 05:28:56.000641    4121 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:28:56.001036    4121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:56.036514    4121 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:28:56.036629    4121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:56.451082    4121 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 05:28:56.441867015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:28:56.451184    4121 docker.go:319] overlay module found
	I1210 05:28:56.454513    4121 out.go:99] Using the docker driver based on user configuration
	I1210 05:28:56.454563    4121 start.go:309] selected driver: docker
	I1210 05:28:56.454573    4121 start.go:927] validating driver "docker" against <nil>
	I1210 05:28:56.454682    4121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:56.516786    4121 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 05:28:56.508270325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:28:56.516948    4121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:56.517239    4121 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 05:28:56.517415    4121 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:28:56.520658    4121 out.go:171] Using Docker driver with root privileges
	I1210 05:28:56.523558    4121 cni.go:84] Creating CNI manager for ""
	I1210 05:28:56.523624    4121 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1210 05:28:56.523640    4121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:56.523712    4121 start.go:353] cluster config:
	{Name:download-only-343562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-343562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:28:56.526643    4121 out.go:99] Starting "download-only-343562" primary control-plane node in "download-only-343562" cluster
	I1210 05:28:56.526664    4121 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1210 05:28:56.529603    4121 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:28:56.529641    4121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1210 05:28:56.529793    4121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:28:56.545387    4121 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:28:56.545577    4121 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 05:28:56.545684    4121 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:28:56.586864    4121 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1210 05:28:56.586899    4121 cache.go:65] Caching tarball of preloaded images
	I1210 05:28:56.587087    4121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1210 05:28:56.590435    4121 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 05:28:56.590458    4121 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1210 05:28:56.678844    4121 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1210 05:28:56.678990    4121 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1210 05:29:03.387232    4121 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1210 05:29:03.387723    4121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/download-only-343562/config.json ...
	I1210 05:29:03.387763    4121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/download-only-343562/config.json: {Name:mk52ce21c8fcd2e4b56adaf3516872d73e1e07f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:03.387969    4121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1210 05:29:03.388211    4121 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-343562 host does not exist
	  To start a cluster, run: "minikube start -p download-only-343562"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.33s)
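The Last Start log above fetches the preload with an md5 obtained from the GCS API ("Got checksum from GCS API \"38d7f581f2fa4226c8af2c9106b982b7\""). A hedged manual equivalent of that integrity check against the cached file (md5sum -c expects "checksum  path" pairs):

	echo "38d7f581f2fa4226c8af2c9106b982b7  /home/jenkins/minikube-integration/22094-2307/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -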

TestDownloadOnly/v1.28.0/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.31s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-343562
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnly/v1.34.3/json-events (4.04s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-484675 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-484675 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.044688417s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (4.04s)

TestDownloadOnly/v1.34.3/cached-images (0.45s)

=== RUN   TestDownloadOnly/v1.34.3/cached-images
I1210 05:29:11.673617    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 05:29:11.822798    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 05:29:11.969752    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.34.3/cached-images (0.45s)

TestDownloadOnly/v1.34.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.3/binaries
--- PASS: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-484675
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-484675: exit status 85 (79.371933ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-343562 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-343562 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ delete  │ -p download-only-343562                                                                                                                                                               │ download-only-343562 │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ start   │ -o=json --download-only -p download-only-484675 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-484675 │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:29:07
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:29:07.418278    4321 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:29:07.418474    4321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:07.418501    4321 out.go:374] Setting ErrFile to fd 2...
	I1210 05:29:07.418519    4321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:07.418941    4321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:29:07.419545    4321 out.go:368] Setting JSON to true
	I1210 05:29:07.420419    4321 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":698,"bootTime":1765343850,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:29:07.420541    4321 start.go:143] virtualization:  
	I1210 05:29:07.447969    4321 out.go:99] [download-only-484675] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:29:07.448261    4321 notify.go:221] Checking for updates...
	I1210 05:29:07.467869    4321 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:29:07.500836    4321 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:29:07.517142    4321 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:29:07.521300    4321 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:29:07.525165    4321 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 05:29:07.532316    4321 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:29:07.532647    4321 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:29:07.552458    4321 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:29:07.552579    4321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:07.621072    4321 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 05:29:07.611813118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:29:07.621176    4321 docker.go:319] overlay module found
	I1210 05:29:07.624635    4321 out.go:99] Using the docker driver based on user configuration
	I1210 05:29:07.624672    4321 start.go:309] selected driver: docker
	I1210 05:29:07.624679    4321 start.go:927] validating driver "docker" against <nil>
	I1210 05:29:07.624787    4321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:07.679892    4321 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 05:29:07.671499074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:29:07.680039    4321 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:29:07.680304    4321 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 05:29:07.680454    4321 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:29:07.684014    4321 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-484675 host does not exist
	  To start a cluster, run: "minikube start -p download-only-484675"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.21s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-484675
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-rc.1/json-events (2.92s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-942365 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-942365 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (2.917458503s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (2.92s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0.48s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
I1210 05:29:15.527094    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
I1210 05:29:15.709432    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
I1210 05:29:15.862801    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
--- PASS: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.48s)
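The binary.go:80 lines above show the checksum-bound URL form minikube uses when it streams a binary instead of caching it. A minimal shell sketch of the same verification done by hand, assuming only curl and sha256sum on the host; the URLs are the ones logged above, and the dl.k8s.io .sha256 files contain just the hex digest:

  # Fetch kubeadm and its published digest, then verify before use.
  curl -fsSLo kubeadm https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm
  curl -fsSLo kubeadm.sha256 https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
  # sha256sum expects "<digest>  <file>", so build that line from the digest-only file.
  echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check -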

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
--- PASS: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-942365
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-942365: exit status 85 (83.7412ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-343562 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-343562 │ jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ delete  │ -p download-only-343562                                                                                                                                                                    │ download-only-343562 │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ start   │ -o=json --download-only -p download-only-484675 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-484675 │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ delete  │ -p download-only-484675                                                                                                                                                                    │ download-only-484675 │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ start   │ -o=json --download-only -p download-only-942365 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-942365 │ jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:29:12
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:29:12.594566    4547 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:29:12.594753    4547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:12.594780    4547 out.go:374] Setting ErrFile to fd 2...
	I1210 05:29:12.594798    4547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:12.595181    4547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:29:12.596127    4547 out.go:368] Setting JSON to true
	I1210 05:29:12.596807    4547 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":703,"bootTime":1765343850,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:29:12.596871    4547 start.go:143] virtualization:  
	I1210 05:29:12.600266    4547 out.go:99] [download-only-942365] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:29:12.600501    4547 notify.go:221] Checking for updates...
	I1210 05:29:12.603423    4547 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:29:12.606373    4547 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:29:12.609315    4547 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:29:12.612622    4547 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:29:12.615495    4547 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 05:29:12.621060    4547 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:29:12.621302    4547 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:29:12.643416    4547 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:29:12.643521    4547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:12.701511    4547 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 05:29:12.692969256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:29:12.701611    4547 docker.go:319] overlay module found
	I1210 05:29:12.704693    4547 out.go:99] Using the docker driver based on user configuration
	I1210 05:29:12.704730    4547 start.go:309] selected driver: docker
	I1210 05:29:12.704738    4547 start.go:927] validating driver "docker" against <nil>
	I1210 05:29:12.704846    4547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:12.764677    4547 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 05:29:12.752485221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:29:12.764830    4547 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:29:12.765090    4547 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 05:29:12.765251    4547 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:29:12.768300    4547 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-942365 host does not exist
	  To start a cluster, run: "minikube start -p download-only-942365"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-942365
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (1.11s)

=== RUN   TestBinaryMirror
I1210 05:29:17.825012    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-633325 --alsologtostderr --binary-mirror http://127.0.0.1:36435 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-633325" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-633325
--- PASS: TestBinaryMirror (1.11s)
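TestBinaryMirror starts a local HTTP server seeded with cached Kubernetes binaries and points minikube at it via --binary-mirror, as the start line above shows. A hedged sketch of reproducing that by hand; the mirror directory and profile name are illustrative assumptions, and the directory must mirror dl.k8s.io's release path layout for the lookups to resolve:

  # Serve a directory laid out like dl.k8s.io (e.g. .../v1.34.3/bin/linux/arm64/kubectl).
  python3 -m http.server 36435 --directory ./mirror &
  # Download-only start that pulls kubectl/kubeadm/kubelet from the mirror instead of dl.k8s.io.
  out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:36435 --driver=docker --container-runtime=containerd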

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-173024
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-173024: exit status 85 (63.849924ms)

-- stdout --
	* Profile "addons-173024" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-173024"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-173024
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-173024: exit status 85 (79.215343ms)

-- stdout --
	* Profile "addons-173024" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-173024"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (145.4s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-173024 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-173024 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m25.397453469s)
--- PASS: TestAddons/Setup (145.40s)
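With the cluster up, the addon set exercised by the parallel tests below can be inspected or toggled per profile. A small usage sketch with the profile name from this run; the list subcommand is shown for orientation, and the disable call mirrors the per-test teardowns below:

  # Show which addons are enabled for this profile.
  out/minikube-linux-arm64 addons list -p addons-173024
  # Addons can also be disabled individually after start.
  out/minikube-linux-arm64 -p addons-173024 addons disable cloud-spanner --alsologtostderr -v=1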

TestAddons/serial/Volcano (40.72s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 53.242785ms
addons_test.go:870: volcano-scheduler stabilized in 53.283917ms
addons_test.go:886: volcano-controller stabilized in 53.967589ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-qtfw4" [51b937e0-20b9-436d-a2df-72be75861c7c] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004344541s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-xmg99" [1394adc8-0ceb-4bec-af08-8455d0cb630c] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003420233s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-bf6d5" [2cb1692f-d54b-4aa4-810d-9c0ea2f07cde] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003236489s
addons_test.go:905: (dbg) Run:  kubectl --context addons-173024 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-173024 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-173024 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [f4d29aae-b168-455d-9caa-9091b58f7e3c] Pending
helpers_test.go:353: "test-job-nginx-0" [f4d29aae-b168-455d-9caa-9091b58f7e3c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [f4d29aae-b168-455d-9caa-9091b58f7e3c] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004162304s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable volcano --alsologtostderr -v=1: (12.021802665s)
--- PASS: TestAddons/serial/Volcano (40.72s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-173024 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-173024 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-173024 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-173024 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [10284960-ad9e-4328-8f3a-2459f923ecdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [10284960-ad9e-4328-8f3a-2459f923ecdc] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003660708s
addons_test.go:696: (dbg) Run:  kubectl --context addons-173024 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-173024 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-173024 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-173024 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

TestAddons/parallel/Registry (15.05s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.484211ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-m86rx" [f501690c-12f0-41df-be1e-ba6e9d2807fe] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003421852s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-v6z8k" [0a55ac7f-c3c7-4534-99db-bc90d4de9ed2] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003757303s
addons_test.go:394: (dbg) Run:  kubectl --context addons-173024 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-173024 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-173024 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.997965397s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 ip
2025/12/10 05:32:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.05s)
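The DEBUG GET above probes the registry addon through the node IP (resolved by "minikube ip") on port 5000. A hedged sketch of pushing an image through that same endpoint from the host; the image tag is hypothetical, and the host Docker daemon must be configured to treat 192.168.49.2:5000 as an insecure (plain-HTTP) registry for the push to succeed:

  # Tag a local image against the in-cluster registry and push it.
  docker tag alpine:latest 192.168.49.2:5000/demo/alpine:latest
  docker push 192.168.49.2:5000/demo/alpine:latest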

TestAddons/parallel/RegistryCreds (0.75s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.490603ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-173024
addons_test.go:334: (dbg) Run:  kubectl --context addons-173024 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.75s)

TestAddons/parallel/Ingress (17.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-173024 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-173024 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-173024 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [3320ea99-d187-403d-8c81-89e77d0b58d7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [3320ea99-d187-403d-8c81-89e77d0b58d7] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003580233s
I1210 05:34:17.640927    4116 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-173024 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable ingress-dns --alsologtostderr -v=1: (1.38111216s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable ingress --alsologtostderr -v=1: (7.802545016s)
--- PASS: TestAddons/parallel/Ingress (17.84s)

TestAddons/parallel/InspektorGadget (11.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-7twzj" [1c11ff12-a80e-4657-92bc-6e592ddde104] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003972938s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable inspektor-gadget --alsologtostderr -v=1: (5.836891384s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

TestAddons/parallel/MetricsServer (6.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.889596ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-qjf9c" [2866dc78-d9d6-416a-9973-24e5ad8b75b0] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003264238s
addons_test.go:465: (dbg) Run:  kubectl --context addons-173024 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.84s)

TestAddons/parallel/CSI (52.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1210 05:33:22.761538    4116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 05:33:22.765535    4116 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 05:33:22.765567    4116 kapi.go:107] duration metric: took 9.162485ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 9.173686ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [dc66f98c-e6d3-47ab-8445-6270ab647022] Pending
helpers_test.go:353: "task-pv-pod" [dc66f98c-e6d3-47ab-8445-6270ab647022] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [dc66f98c-e6d3-47ab-8445-6270ab647022] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.007505936s
addons_test.go:574: (dbg) Run:  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-173024 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-173024 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-173024 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-173024 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [be216451-6506-42c0-af7f-20df720d6ca0] Pending
helpers_test.go:353: "task-pv-pod-restore" [be216451-6506-42c0-af7f-20df720d6ca0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [be216451-6506-42c0-af7f-20df720d6ca0] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002802384s
addons_test.go:616: (dbg) Run:  kubectl --context addons-173024 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-173024 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-173024 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.913592506s)
--- PASS: TestAddons/parallel/CSI (52.38s)
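Condensed, the CSI flow above is: create a PVC, run a pod against it, snapshot the volume, wait for the snapshot to become ready, then restore it into a new claim and pod. A sketch of the same sequence using the commands and bundled testdata manifests the test itself runs:

  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/snapshot.yaml
  # Poll until readyToUse is true, as the helpers above do, before restoring.
  kubectl --context addons-173024 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-173024 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml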

TestAddons/parallel/Headlamp (15.98s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-173024 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-173024 --alsologtostderr -v=1: (1.168807827s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-ldg5j" [c9329530-1022-4764-b2b3-03e5e906c42d] Pending
helpers_test.go:353: "headlamp-dfcdc64b-ldg5j" [c9329530-1022-4764-b2b3-03e5e906c42d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-ldg5j" [c9329530-1022-4764-b2b3-03e5e906c42d] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-ldg5j" [c9329530-1022-4764-b2b3-03e5e906c42d] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003877223s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable headlamp --alsologtostderr -v=1: (5.807943063s)
--- PASS: TestAddons/parallel/Headlamp (15.98s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-ttztr" [a5739d17-b130-4765-955c-171f80f7f134] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003188327s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (52.75s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-173024 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-173024 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-173024 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [80077c9d-1ad5-4056-ac1d-468ae00fe66c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [80077c9d-1ad5-4056-ac1d-468ae00fe66c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [80077c9d-1ad5-4056-ac1d-468ae00fe66c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003159302s
addons_test.go:969: (dbg) Run:  kubectl --context addons-173024 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 ssh "cat /opt/local-path-provisioner/pvc-1b63b8b7-e471-4c3d-978f-af265ef9dd94_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-173024 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-173024 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.175471822s)
--- PASS: TestAddons/parallel/LocalPath (52.75s)

TestAddons/parallel/NvidiaDevicePlugin (5.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-gzdhl" [655fd212-e4ec-485d-b2df-010f019fbee2] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004236138s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.66s)

TestAddons/parallel/Yakd (11.91s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-mqh2g" [00c630bf-edd4-4422-bffb-d6c4308cc5b1] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003768013s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-173024 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-173024 addons disable yakd --alsologtostderr -v=1: (5.907369381s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-173024
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-173024: (12.068490144s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-173024
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-173024
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-173024
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

TestCertOptions (41.65s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-646610 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-646610 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (38.860312517s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-646610 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-646610 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-646610 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-646610" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-646610
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-646610: (2.06791316s)
--- PASS: TestCertOptions (41.65s)
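The openssl step above prints the full apiserver certificate; to confirm just the SANs requested by the --apiserver-ips/--apiserver-names flags (127.0.0.1, 192.168.15.15, localhost, www.google.com), a narrower check is possible; the grep filter here is illustrative:

  # Print only the Subject Alternative Name block of the apiserver cert.
  out/minikube-linux-arm64 -p cert-options-646610 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"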

TestCertExpiration (229s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-734005 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1210 06:43:37.012775    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-734005 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.045486953s)
E1210 06:44:41.944947    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:46:38.876285    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:46:44.570811    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-734005 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-734005 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.505721629s)
helpers_test.go:176: Cleaning up "cert-expiration-734005" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-734005
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-734005: (2.444711513s)
--- PASS: TestCertExpiration (229.00s)
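
The test above exercises certificate regeneration on restart; a minimal sketch of the same flow, assuming a throwaway profile (cert-exp-demo) and that a three-minute wait is acceptable:

  # create a cluster whose certs expire almost immediately
  minikube start -p cert-exp-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
  sleep 180   # let the certificates expire
  # restarting with a longer expiration regenerates them (about 8s in the log above)
  minikube start -p cert-exp-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd
  minikube delete -p cert-exp-demo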

TestForceSystemdFlag (44.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-868870 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1210 06:41:38.876392    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:41:44.571488    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-868870 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.868333176s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-868870 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-868870" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-868870
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-868870: (2.129676978s)
--- PASS: TestForceSystemdFlag (44.31s)
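
What this test checks: with --force-systemd, the generated containerd config should select the systemd cgroup driver. A minimal sketch, assuming the SystemdCgroup key is the relevant setting (the exact assertion in docker_test.go is not shown in this log):

  minikube start -p systemd-demo --memory=3072 --force-systemd --driver=docker --container-runtime=containerd
  # inspect the containerd config inside the node; expect SystemdCgroup = true
  minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
  minikube delete -p systemd-demo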

TestForceSystemdEnv (41.28s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-099835 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-099835 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.901942734s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-099835 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-099835" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-099835
E1210 06:43:07.644462    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-099835: (2.07481127s)
--- PASS: TestForceSystemdEnv (41.28s)

TestDockerEnvContainerd (53.87s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-643292 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-643292 --driver=docker  --container-runtime=containerd: (38.354702198s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-643292"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-643292": (1.106076969s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Nps6zBRVqgO2/agent.25662" SSH_AGENT_PID="25663" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Nps6zBRVqgO2/agent.25662" SSH_AGENT_PID="25663" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Nps6zBRVqgO2/agent.25662" SSH_AGENT_PID="25663" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.209123647s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Nps6zBRVqgO2/agent.25662" SSH_AGENT_PID="25663" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-643292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-643292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-643292: (2.115906602s)
--- PASS: TestDockerEnvContainerd (53.87s)
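
The docker-env flow above points a host docker CLI at the daemon inside the minikube node over SSH. A minimal sketch, assuming minikube is on PATH and an image tag of your choosing:

  minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
  # exports DOCKER_HOST=ssh://... and loads the node's SSH key into an agent
  eval "$(minikube -p dockerenv-demo docker-env --ssh-host --ssh-add)"
  docker version                                          # now talks to the daemon inside the node
  DOCKER_BUILDKIT=0 docker build -t demo/image:latest .   # classic builder, as in the test
  docker image ls
  minikube delete -p dockerenv-demo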

TestErrorSpam/setup (36.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-725106 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-725106 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-725106 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-725106 --driver=docker  --container-runtime=containerd: (36.849871029s)
--- PASS: TestErrorSpam/setup (36.85s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause
--- PASS: TestErrorSpam/pause (1.73s)

TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (1.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop: (1.389210012s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop
--- PASS: TestErrorSpam/stop (1.59s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-944360 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1210 05:36:44.571884    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:44.578287    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:44.589680    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:44.611065    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:44.652441    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:44.733848    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:44.895373    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:45.217039    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:45.858662    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:47.140082    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:49.701398    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:36:54.823570    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:37:05.065589    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:37:25.547006    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-944360 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (59.180248371s)
--- PASS: TestFunctional/serial/StartWithProxy (59.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.9s)

=== RUN   TestFunctional/serial/SoftStart
I1210 05:37:27.745266    4116 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-944360 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-944360 --alsologtostderr -v=8: (7.899886177s)
functional_test.go:678: soft start took 7.902218098s for "functional-944360" cluster.
I1210 05:37:35.645496    4116 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (7.90s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-944360 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 cache add registry.k8s.io/pause:3.1: (1.173250042s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 cache add registry.k8s.io/pause:3.3: (1.190641653s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 cache add registry.k8s.io/pause:latest: (1.072580327s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-944360 /tmp/TestFunctionalserialCacheCmdcacheadd_local20787569/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cache add minikube-local-cache-test:functional-944360
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cache delete minikube-local-cache-test:functional-944360
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-944360
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.007356ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
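
Taken together, the cache tests above exercise the full minikube image-cache workflow. A minimal sketch, assuming a running profile named demo:

  minikube -p demo cache add registry.k8s.io/pause:latest          # pull into the host cache and load into the node
  minikube cache list                                              # host-side cache contents
  minikube -p demo ssh sudo crictl images                          # confirm the image landed in containerd
  minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest   # remove it from the node only
  minikube -p demo cache reload                                    # re-push everything in the cache to the node
  minikube cache delete registry.k8s.io/pause:latest               # drop it from the host cache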

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 kubectl -- --context functional-944360 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-944360 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (44.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-944360 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 05:38:06.509031    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-944360 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.564544507s)
functional_test.go:776: restart took 44.564657296s for "functional-944360" cluster.
I1210 05:38:27.861524    4116 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (44.56s)
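
The restart above shows how --extra-config threads component flags through to the Kubernetes control plane; a minimal sketch, assuming an existing profile named demo:

  # enable an extra admission plugin on the apiserver and wait for all components
  minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all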

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-944360 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 logs: (1.491929753s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 logs --file /tmp/TestFunctionalserialLogsFileCmd604148500/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 logs --file /tmp/TestFunctionalserialLogsFileCmd604148500/001/logs.txt: (1.476480056s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-944360 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-944360
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-944360: exit status 115 (449.582574ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32343 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-944360 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
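
The failure mode above is worth noting: minikube service exits with status 115 (SVC_UNREACHABLE) when a Service selects no running pods. A minimal sketch, assuming a manifest like the repo's testdata/invalidsvc.yaml (a Service whose backing pod can never run):

  kubectl apply -f invalidsvc.yaml
  minikube service invalid-svc        # exits 115: "service not available: no running pod for service invalid-svc found"
  kubectl delete -f invalidsvc.yaml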

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 config get cpus: exit status 14 (71.882135ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 config get cpus: exit status 14 (64.274267ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
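
The config behavior exercised above: config get on an unset key exits with status 14 and an error on stderr, which is what makes the set/unset round-trip testable. A minimal sketch:

  minikube config get cpus    # exit 14: "Error: specified key could not be found in config"
  minikube config set cpus 2
  minikube config get cpus    # prints 2, exit 0
  minikube config unset cpus
  minikube config get cpus    # exit 14 again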

TestFunctional/parallel/DashboardCmd (7.8s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-944360 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-944360 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 42167: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.80s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-944360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-944360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (187.556285ms)

-- stdout --
	* [functional-944360] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 05:39:04.401676   41781 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:39:04.401909   41781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:39:04.401941   41781 out.go:374] Setting ErrFile to fd 2...
	I1210 05:39:04.401962   41781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:39:04.402224   41781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:39:04.402623   41781 out.go:368] Setting JSON to false
	I1210 05:39:04.403644   41781 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1295,"bootTime":1765343850,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:39:04.403742   41781 start.go:143] virtualization:  
	I1210 05:39:04.406945   41781 out.go:179] * [functional-944360] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 05:39:04.410699   41781 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:39:04.410797   41781 notify.go:221] Checking for updates...
	I1210 05:39:04.416346   41781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:39:04.419181   41781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:39:04.422051   41781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:39:04.424943   41781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:39:04.427796   41781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:39:04.431212   41781 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 05:39:04.431849   41781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:39:04.461214   41781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:39:04.461344   41781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:39:04.520104   41781 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 05:39:04.509284129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:39:04.520357   41781 docker.go:319] overlay module found
	I1210 05:39:04.523423   41781 out.go:179] * Using the docker driver based on existing profile
	I1210 05:39:04.526223   41781 start.go:309] selected driver: docker
	I1210 05:39:04.526249   41781 start.go:927] validating driver "docker" against &{Name:functional-944360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-944360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:39:04.526357   41781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:39:04.529999   41781 out.go:203] 
	W1210 05:39:04.532872   41781 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:39:04.535684   41781 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-944360 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)
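
--dry-run validates flags against the existing profile without touching it, so it is a cheap way to pre-check a start invocation; the 250MB request above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch, assuming an existing profile named demo:

  minikube start -p demo --dry-run --memory 250MB   # exit 23: requested memory below the 1800MB usable minimum
  minikube start -p demo --dry-run                  # exit 0: the current config would start cleanly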

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-944360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-944360 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (210.307494ms)

-- stdout --
	* [functional-944360] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 05:39:04.199441   41735 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:39:04.199637   41735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:39:04.199666   41735 out.go:374] Setting ErrFile to fd 2...
	I1210 05:39:04.199686   41735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:39:04.200661   41735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 05:39:04.201108   41735 out.go:368] Setting JSON to false
	I1210 05:39:04.202070   41735 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1295,"bootTime":1765343850,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 05:39:04.202170   41735 start.go:143] virtualization:  
	I1210 05:39:04.205663   41735 out.go:179] * [functional-944360] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 05:39:04.209485   41735 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:39:04.209607   41735 notify.go:221] Checking for updates...
	I1210 05:39:04.215215   41735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:39:04.218176   41735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 05:39:04.221152   41735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 05:39:04.224048   41735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 05:39:04.226765   41735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:39:04.230094   41735 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 05:39:04.230711   41735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:39:04.257113   41735 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 05:39:04.257240   41735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:39:04.331076   41735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 05:39:04.321325142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 05:39:04.331180   41735 docker.go:319] overlay module found
	I1210 05:39:04.336116   41735 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 05:39:04.339197   41735 start.go:309] selected driver: docker
	I1210 05:39:04.339212   41735 start.go:927] validating driver "docker" against &{Name:functional-944360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-944360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:39:04.339314   41735 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:39:04.342822   41735 out.go:203] 
	W1210 05:39:04.345656   41735 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:39:04.348426   41735 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
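
The status command supports the three output shapes shown above: the default table, a Go template via -f, and JSON via -o. A minimal sketch, assuming a profile named demo:

  minikube -p demo status
  minikube -p demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p demo status -o json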

TestFunctional/parallel/ServiceCmdConnect (7.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-944360 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-944360 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-wzwkq" [652946e6-89cc-4bf1-a256-15716624f8e3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-wzwkq" [652946e6-89cc-4bf1-a256-15716624f8e3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003887565s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31466
functional_test.go:1680: http://192.168.49.2:31466: success! body:
Request served by hello-node-connect-7d85dfc575-wzwkq

HTTP/1.1 GET /

Host: 192.168.49.2:31466
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.59s)
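
The connectivity check above follows the standard expose-and-probe pattern. A minimal sketch, assuming a profile named demo and curl on the host (the test itself probes with a Go HTTP client):

  kubectl --context demo create deployment hello-node --image kicbase/echo-server
  kubectl --context demo expose deployment hello-node --type=NodePort --port=8080
  URL=$(minikube -p demo service hello-node --url)   # e.g. http://192.168.49.2:31466
  curl "$URL"                                        # echo-server reflects the request back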

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (19.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [885ab385-fa04-4f0c-8c67-9f56e1c4b80e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003530122s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-944360 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-944360 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-944360 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-944360 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f8f88a53-2040-4a31-adcb-dc3d447f3641] Pending
helpers_test.go:353: "sp-pod" [f8f88a53-2040-4a31-adcb-dc3d447f3641] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003280218s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-944360 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-944360 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-944360 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [bf8d4dfd-8e35-4d57-8e6e-e8d91e357551] Pending
helpers_test.go:353: "sp-pod" [bf8d4dfd-8e35-4d57-8e6e-e8d91e357551] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003827942s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-944360 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.87s)
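The sequence above is the whole point of the test: data written through the claim must survive pod deletion because the PersistentVolume outlives the pod. A minimal sketch of the same round trip, using the manifests from the test data:

  # Create the claim and a pod that mounts it
  kubectl --context functional-944360 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-944360 apply -f testdata/storage-provisioner/pod.yaml
  # Write through the mount, recreate the pod, and confirm the file is still there
  kubectl --context functional-944360 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-944360 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-944360 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-944360 exec sp-pod -- ls /tmp/mount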
TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh -n functional-944360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cp functional-944360:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3548882707/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh -n functional-944360 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh -n functional-944360 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.48s)
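The three copy directions exercised above, as standalone commands (a sketch against the same profile; the host destination path is ours):

  # Host -> guest
  out/minikube-linux-arm64 -p functional-944360 cp testdata/cp-test.txt /home/docker/cp-test.txt
  # Guest -> host
  out/minikube-linux-arm64 -p functional-944360 cp functional-944360:/home/docker/cp-test.txt /tmp/cp-test.txt
  # Host -> guest into a directory that does not exist yet (the test's cat of the result passes)
  out/minikube-linux-arm64 -p functional-944360 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt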
TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4116/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo cat /etc/test/nested/copy/4116/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4116.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo cat /etc/ssl/certs/4116.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4116.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo cat /usr/share/ca-certificates/4116.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo cat /etc/ssl/certs/41162.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo cat /usr/share/ca-certificates/41162.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.27s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-944360 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
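The go-template used here is handy on its own: it walks the first node's label map and prints only the keys, which is what the test asserts against:

  kubectl --context functional-944360 get nodes --output=go-template \
    "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"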
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 ssh "sudo systemctl is-active docker": exit status 1 (374.544715ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 ssh "sudo systemctl is-active crio": exit status 1 (286.807711ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
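The non-zero exits above are the expected outcome: on a containerd profile the docker and crio units should be inactive, and systemctl is-active reports an inactive unit with exit status 3, which the ssh wrapper surfaces. A sketch of the same check:

  # Each command prints "inactive" and exits non-zero on a containerd cluster
  out/minikube-linux-arm64 -p functional-944360 ssh "sudo systemctl is-active docker"
  out/minikube-linux-arm64 -p functional-944360 ssh "sudo systemctl is-active crio"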
TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-944360 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-944360 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-944360 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-944360 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 39345: os: process already finished
helpers_test.go:520: unable to terminate pid 39151: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-944360 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-944360 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [c1d3b037-7462-43c4-adeb-799276bc8e31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [c1d3b037-7462-43c4-adeb-799276bc8e31] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003211128s
I1210 05:38:46.448071    4116 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)
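A minimal sketch of the tunnel flow this group exercises: run the tunnel in the background so LoadBalancer services get an ingress IP, then read that IP back (same manifest and jsonpath as the test):

  out/minikube-linux-arm64 -p functional-944360 tunnel --alsologtostderr &
  kubectl --context functional-944360 apply -f testdata/testsvc.yaml
  kubectl --context functional-944360 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'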
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-944360 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.135.83 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-944360 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-944360 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-944360 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-7k5tj" [3837e2d1-ddf1-4136-88f1-3166a40b6bc8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-7k5tj" [3837e2d1-ddf1-4136-88f1-3166a40b6bc8] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003609626s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "438.018313ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "51.601599ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "364.933779ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.790146ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.28s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdany-port522781324/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765345138873542073" to /tmp/TestFunctionalparallelMountCmdany-port522781324/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765345138873542073" to /tmp/TestFunctionalparallelMountCmdany-port522781324/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765345138873542073" to /tmp/TestFunctionalparallelMountCmdany-port522781324/001/test-1765345138873542073
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.106501ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 05:38:59.238242    4116 retry.go:31] will retry after 336.606494ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:38 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:38 test-1765345138873542073
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh cat /mount-9p/test-1765345138873542073
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-944360 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [176e5bcb-d1cf-49e3-84c1-58f3d3452b28] Pending
helpers_test.go:353: "busybox-mount" [176e5bcb-d1cf-49e3-84c1-58f3d3452b28] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [176e5bcb-d1cf-49e3-84c1-58f3d3452b28] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [176e5bcb-d1cf-49e3-84c1-58f3d3452b28] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003638165s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-944360 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdany-port522781324/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.28s)
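A sketch of the 9p mount flow checked above, with /tmp/data standing in for any host directory (the path is ours, not the test's):

  # Mount a host directory into the guest, then inspect it from inside
  out/minikube-linux-arm64 mount -p functional-944360 /tmp/data:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-944360 ssh -- ls -la /mount-9p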
TestFunctional/parallel/ServiceCmd/List (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 service list -o json
functional_test.go:1504: Took "558.501754ms" to run "out/minikube-linux-arm64 -p functional-944360 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30273
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30273
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
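The four ServiceCmd variants above are different views of the same endpoint (a sketch):

  out/minikube-linux-arm64 -p functional-944360 service list -o json
  out/minikube-linux-arm64 -p functional-944360 service --namespace=default --https --url hello-node
  out/minikube-linux-arm64 -p functional-944360 service hello-node --url --format={{.IP}}
  out/minikube-linux-arm64 -p functional-944360 service hello-node --url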
TestFunctional/parallel/MountCmd/specific-port (1.49s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdspecific-port3330480614/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdspecific-port3330480614/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 ssh "sudo umount -f /mount-9p": exit status 1 (365.208372ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-944360 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdspecific-port3330480614/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.7s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup731130698/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup731130698/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup731130698/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T" /mount1: exit status 1 (996.505864ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 05:39:09.650266    4116 retry.go:31] will retry after 460.342114ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-944360 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup731130698/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup731130698/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-944360 /tmp/TestFunctionalparallelMountCmdVerifyCleanup731130698/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.70s)
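Cleanup works without tracking individual mount processes: --kill=true tears down every mount for the profile at once, which is what this test verifies. A sketch (host path /tmp/data is our placeholder):

  out/minikube-linux-arm64 mount -p functional-944360 /tmp/data:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-944360 /tmp/data:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-944360 --kill=true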
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 version -o=json --components: (1.31633801s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-944360 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-944360
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-944360
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-944360 image ls --format short --alsologtostderr:
I1210 05:39:18.962420   44820 out.go:360] Setting OutFile to fd 1 ...
I1210 05:39:18.962603   44820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:18.962633   44820 out.go:374] Setting ErrFile to fd 2...
I1210 05:39:18.962654   44820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:18.962911   44820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 05:39:18.963736   44820 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:18.963882   44820 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:18.964411   44820 cli_runner.go:164] Run: docker container inspect functional-944360 --format={{.State.Status}}
I1210 05:39:18.989088   44820 ssh_runner.go:195] Run: systemctl --version
I1210 05:39:18.989140   44820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-944360
I1210 05:39:19.020284   44820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-944360/id_rsa Username:docker}
I1210 05:39:19.129671   44820 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
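The same image inventory is available in each supported format; the four ImageList subtests only vary the flag (a sketch):

  out/minikube-linux-arm64 -p functional-944360 image ls --format short
  out/minikube-linux-arm64 -p functional-944360 image ls --format table
  out/minikube-linux-arm64 -p functional-944360 image ls --format json
  out/minikube-linux-arm64 -p functional-944360 image ls --format yaml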
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-944360 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.3            │ sha256:7ada8f │ 20.7MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.3            │ sha256:cf65ae │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.3            │ sha256:4461da │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 265kB  │
│ docker.io/library/minikube-local-cache-test │ functional-944360  │ sha256:187b8b │ 992B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:667491 │ 8.03MB │
│ docker.io/kicbase/echo-server               │ functional-944360  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.3            │ sha256:2f2aa2 │ 15.8MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-944360 image ls --format table --alsologtostderr:
I1210 05:39:19.531755   44990 out.go:360] Setting OutFile to fd 1 ...
I1210 05:39:19.531944   44990 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:19.531971   44990 out.go:374] Setting ErrFile to fd 2...
I1210 05:39:19.531990   44990 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:19.532304   44990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 05:39:19.532991   44990 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:19.533179   44990 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:19.533810   44990 cli_runner.go:164] Run: docker container inspect functional-944360 --format={{.State.Status}}
I1210 05:39:19.561265   44990 ssh_runner.go:195] Run: systemctl --version
I1210 05:39:19.561322   44990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-944360
I1210 05:39:19.585112   44990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-944360/id_rsa Username:docker}
I1210 05:39:19.701531   44990 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-944360 image ls --format json --alsologtostderr:
[{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:187b8b0a3596efc82d8108da07255f790e24f4da482c7a2aa9f3e56dbd5d3e50","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-944360"],"size":"992"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-944360"],"size":"2173567"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21134420"},{"id":"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"20717884"},{"id":"sha256:66749159455b3f08c83
18fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8032639"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"23107444"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"15774141"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20389531"},{"id":"sha256:cf65ae6c8f700cc27f57b7305c6e
2b71276a7eed943c559a0091e1e667169896","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"24565565"},{"id":"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"22802766"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"265458"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f9
52adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-944360 image ls --format json --alsologtostderr:
I1210 05:39:19.264214   44896 out.go:360] Setting OutFile to fd 1 ...
I1210 05:39:19.264374   44896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:19.264386   44896 out.go:374] Setting ErrFile to fd 2...
I1210 05:39:19.264392   44896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:19.264669   44896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 05:39:19.265313   44896 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:19.265435   44896 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:19.265971   44896 cli_runner.go:164] Run: docker container inspect functional-944360 --format={{.State.Status}}
I1210 05:39:19.293582   44896 ssh_runner.go:195] Run: systemctl --version
I1210 05:39:19.293646   44896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-944360
I1210 05:39:19.314509   44896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-944360/id_rsa Username:docker}
I1210 05:39:19.430282   44896 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-944360 image ls --format yaml --alsologtostderr:
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20389531"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8032639"
- id: sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "22802766"
- id: sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "15774141"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-944360
size: "2173567"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "23107444"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "265458"
- id: sha256:187b8b0a3596efc82d8108da07255f790e24f4da482c7a2aa9f3e56dbd5d3e50
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-944360
size: "992"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21134420"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "24565565"
- id: sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "20717884"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-944360 image ls --format yaml --alsologtostderr:
I1210 05:39:18.981099   44821 out.go:360] Setting OutFile to fd 1 ...
I1210 05:39:18.981412   44821 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:18.981426   44821 out.go:374] Setting ErrFile to fd 2...
I1210 05:39:18.981432   44821 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:18.981710   44821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 05:39:18.982512   44821 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:18.982692   44821 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:18.984269   44821 cli_runner.go:164] Run: docker container inspect functional-944360 --format={{.State.Status}}
I1210 05:39:19.010399   44821 ssh_runner.go:195] Run: systemctl --version
I1210 05:39:19.010460   44821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-944360
I1210 05:39:19.032781   44821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-944360/id_rsa Username:docker}
I1210 05:39:19.141556   44821 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-944360 ssh pgrep buildkitd: exit status 1 (384.453377ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr: (3.412781412s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-944360 image build -t localhost/my-image:functional-944360 testdata/build --alsologtostderr:
I1210 05:39:19.630436   45007 out.go:360] Setting OutFile to fd 1 ...
I1210 05:39:19.630591   45007 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:19.630603   45007 out.go:374] Setting ErrFile to fd 2...
I1210 05:39:19.630608   45007 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:39:19.630878   45007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 05:39:19.631555   45007 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:19.633481   45007 config.go:182] Loaded profile config "functional-944360": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1210 05:39:19.634108   45007 cli_runner.go:164] Run: docker container inspect functional-944360 --format={{.State.Status}}
I1210 05:39:19.654581   45007 ssh_runner.go:195] Run: systemctl --version
I1210 05:39:19.654630   45007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-944360
I1210 05:39:19.673216   45007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-944360/id_rsa Username:docker}
I1210 05:39:19.789466   45007 build_images.go:162] Building image from path: /tmp/build.3688377510.tar
I1210 05:39:19.789537   45007 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:39:19.798836   45007 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3688377510.tar
I1210 05:39:19.802599   45007 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3688377510.tar: stat -c "%s %y" /var/lib/minikube/build/build.3688377510.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3688377510.tar': No such file or directory
I1210 05:39:19.802629   45007 ssh_runner.go:362] scp /tmp/build.3688377510.tar --> /var/lib/minikube/build/build.3688377510.tar (3072 bytes)
I1210 05:39:19.820572   45007 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3688377510
I1210 05:39:19.828418   45007 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3688377510 -xf /var/lib/minikube/build/build.3688377510.tar
I1210 05:39:19.836462   45007 containerd.go:394] Building image: /var/lib/minikube/build/build.3688377510
I1210 05:39:19.836563   45007 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3688377510 --local dockerfile=/var/lib/minikube/build/build.3688377510 --output type=image,name=localhost/my-image:functional-944360
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0f0792034ddf5b75a993ac52440f6a58680723513a834a13deb0ba975cf30894 0.0s done
#8 exporting config sha256:37c87e5875640b5ba4f41c8c4e182ceec1e7cab45e1d37d88ca587bf217bf094 0.0s done
#8 naming to localhost/my-image:functional-944360 done
#8 DONE 0.2s
I1210 05:39:22.938943   45007 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3688377510 --local dockerfile=/var/lib/minikube/build/build.3688377510 --output type=image,name=localhost/my-image:functional-944360: (3.102353351s)
I1210 05:39:22.939031   45007 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3688377510
I1210 05:39:22.954000   45007 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3688377510.tar
I1210 05:39:22.961204   45007 build_images.go:218] Built localhost/my-image:functional-944360 from /tmp/build.3688377510.tar
I1210 05:39:22.961242   45007 build_images.go:134] succeeded building to: functional-944360
I1210 05:39:22.961247   45007 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.03s)
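For context, the ImageBuild test above drives an in-node BuildKit build (sudo buildctl build ...) through the minikube CLI. A minimal sketch of the same flow, with a Dockerfile matching the three build steps in the log (the file contents and tag below are illustrative, not taken from this run):

    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo hello > content.txt
    # build directly into the cluster's containerd image store
    minikube -p functional-944360 image build -t localhost/my-image:functional-944360 .
    minikube -p functional-944360 image ls   # the new tag should be listed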

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/12/10 05:39:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-944360
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
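All three UpdateContextCmd variants exercise the same command, which rewrites the profile's kubeconfig entry so kubectl points at the current API server address and port. A minimal sketch:

    minikube -p functional-944360 update-context
    kubectl config current-context   # expected: functional-944360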

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr: (1.054302605s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-944360
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image load --daemon kicbase/echo-server:functional-944360 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image save kicbase/echo-server:functional-944360 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image rm kicbase/echo-server:functional-944360 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-944360
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-944360 image save --daemon kicbase/echo-server:functional-944360 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-944360
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
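Taken together, the ImageCommands tests above cover the round trip between the host Docker daemon, a tarball on disk, and the cluster's containerd store. A condensed sketch of that sequence (the tarball path is illustrative):

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-944360
    minikube -p functional-944360 image load --daemon kicbase/echo-server:functional-944360
    minikube -p functional-944360 image save kicbase/echo-server:functional-944360 /tmp/echo-server.tar
    minikube -p functional-944360 image rm kicbase/echo-server:functional-944360
    minikube -p functional-944360 image load /tmp/echo-server.tar
    minikube -p functional-944360 image save --daemon kicbase/echo-server:functional-944360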

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-944360
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-944360
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-944360
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22094-2307/.minikube/files/etc/test/nested/copy/4116/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-644034 cache add registry.k8s.io/pause:3.1: (1.148465041s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-644034 cache add registry.k8s.io/pause:3.3: (1.114532721s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-644034 cache add registry.k8s.io/pause:latest: (1.087321552s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2466655160/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cache add minikube-local-cache-test:functional-644034
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cache delete minikube-local-cache-test:functional-644034
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-644034
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.245974ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.84s)
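The cache_reload test shows that the host-side cache survives image removal inside the node: crictl rmi deletes the image, crictl inspecti then exits non-zero, and cache reload pushes the cached images back. The same steps by hand:

    minikube -p functional-644034 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    minikube -p functional-644034 cache reload
    minikube -p functional-644034 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again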

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi817496570/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.00s)
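LogsCmd and LogsFileCmd differ only in the sink: the former writes to stdout, the latter to the path given via --file. Sketch (output path illustrative):

    minikube -p functional-644034 logs
    minikube -p functional-644034 logs --file /tmp/logs.txt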

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 config get cpus: exit status 14 (76.728853ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 config get cpus: exit status 14 (57.384648ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)
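The exit codes in ConfigCmd are the point of the test: config get on an unset key exits with status 14 and "specified key could not be found in config", so the unset/get pairs above are expected failures. Sketch:

    minikube -p functional-644034 config set cpus 2
    minikube -p functional-644034 config get cpus    # prints 2, exit 0
    minikube -p functional-644034 config unset cpus
    minikube -p functional-644034 config get cpus    # exit 14: key not found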

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (222.677979ms)

-- stdout --
	* [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 06:08:45.207188   75026 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:08:45.207495   75026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.207502   75026 out.go:374] Setting ErrFile to fd 2...
	I1210 06:08:45.207508   75026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:45.207863   75026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:08:45.208846   75026 out.go:368] Setting JSON to false
	I1210 06:08:45.210333   75026 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3076,"bootTime":1765343850,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:08:45.210473   75026 start.go:143] virtualization:  
	I1210 06:08:45.219826   75026 out.go:179] * [functional-644034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:08:45.223109   75026 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:08:45.223141   75026 notify.go:221] Checking for updates...
	I1210 06:08:45.229278   75026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:08:45.233214   75026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:08:45.236426   75026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:08:45.239672   75026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:08:45.242718   75026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:08:45.246107   75026 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:08:45.246864   75026 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:08:45.275834   75026 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:08:45.275960   75026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.341838   75026 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.332498414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.341951   75026 docker.go:319] overlay module found
	I1210 06:08:45.345011   75026 out.go:179] * Using the docker driver based on existing profile
	I1210 06:08:45.347962   75026 start.go:309] selected driver: docker
	I1210 06:08:45.347984   75026 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.348091   75026 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:08:45.351517   75026 out.go:203] 
	W1210 06:08:45.354388   75026 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:08:45.357111   75026 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644034 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.47s)
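--dry-run runs the full flag and driver validation without touching the cluster, so the undersized --memory request above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the 1800MB usable minimum), while the second invocation without --memory validates cleanly. Sketch:

    minikube start -p functional-644034 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    echo $?   # expected: 23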

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-644034 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (246.864121ms)

-- stdout --
	* [functional-644034] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 06:08:44.939078   74980 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:08:44.939282   74980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:44.939294   74980 out.go:374] Setting ErrFile to fd 2...
	I1210 06:08:44.939299   74980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:08:44.939660   74980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:08:44.940024   74980 out.go:368] Setting JSON to false
	I1210 06:08:44.940802   74980 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3075,"bootTime":1765343850,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:08:44.940868   74980 start.go:143] virtualization:  
	I1210 06:08:44.944259   74980 out.go:179] * [functional-644034] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 06:08:44.947843   74980 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:08:44.947961   74980 notify.go:221] Checking for updates...
	I1210 06:08:44.953680   74980 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:08:44.956468   74980 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:08:44.959155   74980 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:08:44.961892   74980 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:08:44.964664   74980 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:08:44.968139   74980 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:08:44.968786   74980 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:08:44.992488   74980 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:08:44.992615   74980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:08:45.115731   74980 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:08:45.0964514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:08:45.115847   74980 docker.go:319] overlay module found
	I1210 06:08:45.119519   74980 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 06:08:45.122814   74980 start.go:309] selected driver: docker
	I1210 06:08:45.122843   74980 start.go:927] validating driver "docker" against &{Name:functional-644034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-644034 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:08:45.122970   74980 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:08:45.126755   74980 out.go:203] 
	W1210 06:08:45.129986   74980 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 06:08:45.133111   74980 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.25s)
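InternationalLanguage repeats the DryRun failure and checks the French translation of the same RSRC_INSUFFICIENT_REQ_MEMORY message. minikube selects its message catalog from the process locale, so the behaviour can be approximated by exporting a French locale before the same dry run (assuming that locale is installed on the host):

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-644034 --dry-run --memory 250MB --driver=docker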

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh -n functional-644034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cp functional-644034:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm512016206/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh -n functional-644034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh -n functional-644034 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.16s)
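SSHCmd and CpCmd are complementary: ssh runs an arbitrary command inside the node, while cp copies files in either direction, host path to node path or profile:path back to the host. Sketch:

    minikube -p functional-644034 ssh "echo hello"
    minikube -p functional-644034 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-644034 cp functional-644034:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p functional-644034 ssh -n functional-644034 "sudo cat /home/docker/cp-test.txt"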

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4116/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo cat /etc/test/nested/copy/4116/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4116.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo cat /etc/ssl/certs/4116.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4116.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo cat /usr/share/ca-certificates/4116.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo cat /etc/ssl/certs/41162.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo cat /usr/share/ca-certificates/41162.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.67s)
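FileSync and CertSync rely on the same start-time sync: anything under $MINIKUBE_HOME/files/ is copied verbatim into the node's filesystem, and certificates dropped under $MINIKUBE_HOME/certs are additionally installed into the node's trust store, which is where the hashed /etc/ssl/certs/*.0 links checked above come from. A sketch of the FileSync side (file content illustrative):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/4116
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/4116/hosts
    minikube start -p functional-644034
    minikube -p functional-644034 ssh "sudo cat /etc/test/nested/copy/4116/hosts"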

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh "sudo systemctl is-active docker": exit status 1 (298.880463ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh "sudo systemctl is-active crio": exit status 1 (276.124281ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.58s)
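The exit status comes straight from systemd: systemctl is-active prints "inactive" and exits 3 for a stopped unit, which the test reads as confirmation that the non-selected runtimes (docker, crio) are disabled while containerd serves the cluster. Sketch:

    minikube -p functional-644034 ssh "sudo systemctl is-active containerd"   # active, exit 0
    minikube -p functional-644034 ssh "sudo systemctl is-active docker"       # inactive, exit 3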

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-644034 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
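StartTunnel launches minikube tunnel as a background daemon and DeleteTunnel later tears it down; the intervening TunnelCmd tests probe LoadBalancer services through it. By hand the lifecycle looks roughly like this (shell job control used for illustration):

    minikube -p functional-644034 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    # ... exercise LoadBalancer services while the tunnel is up ...
    kill $TUNNEL_PID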

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "329.653329ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.525928ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "320.820055ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.010789ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.38s)
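The two timings above illustrate the difference the light mode makes: a plain profile list probes each cluster's status (hundreds of milliseconds here), while -l / --light skips the status check and returns in a fraction of the time. Sketch:

    minikube profile list              # full status probe
    minikube profile list -l           # light: no cluster status check
    minikube profile list -o json --light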

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3916605591/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.758533ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:08:38.502416    4116 retry.go:31] will retry after 393.792439ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3916605591/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh "sudo umount -f /mount-9p": exit status 1 (260.386688ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-644034 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3916605591/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.80s)
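
The findmnt probe above fails once and is retried after ~394ms (retry.go:31) before the 9p mount becomes visible. A minimal Go sketch of that poll-with-backoff pattern, assuming a fixed attempt count and backoff; waitForMount and its parameters are illustrative, not minikube's own retry.go API:

package mountcheck

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls findmnt inside the guest until the 9p mount
// appears or attempts are exhausted, mirroring the retried probe above.
func waitForMount(profile, mountPoint string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		probe := fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint)
		err = exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", probe).Run()
		if err == nil {
			return nil // mount is visible in the guest
		}
		time.Sleep(backoff) // e.g. ~400ms, as in the retry above
	}
	return fmt.Errorf("mount %s never appeared: %w", mountPoint, err)
}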

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T" /mount1: exit status 1 (617.038536ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 06:08:40.543595    4116 retry.go:31] will retry after 402.758465ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-644034 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-644034 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1525253621/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.88s)
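
The point of VerifyCleanup is that a single "mount --kill=true" reaps every mount daemon for the profile at once. A minimal Go sketch of the same start-three-then-kill flow; the host path /tmp/src is illustrative, the --kill=true flag is the one shown in the log:

package main

import "os/exec"

func main() {
	// Start three mount daemons against the same profile (Start, not Run,
	// so they keep serving in the background), as in the test above.
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command("out/minikube-linux-arm64", "mount",
			"-p", "functional-644034", "/tmp/src:"+target)
		if err := cmd.Start(); err != nil {
			panic(err)
		}
	}
	// A single kill tears all of them down.
	if err := exec.Command("out/minikube-linux-arm64", "mount",
		"-p", "functional-644034", "--kill=true").Run(); err != nil {
		panic(err)
	}
}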

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644034 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-644034
docker.io/kicbase/echo-server:functional-644034
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644034 image ls --format short --alsologtostderr:
I1210 06:08:57.714942   77195 out.go:360] Setting OutFile to fd 1 ...
I1210 06:08:57.715191   77195 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:57.715223   77195 out.go:374] Setting ErrFile to fd 2...
I1210 06:08:57.715243   77195 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:57.715517   77195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:08:57.716138   77195 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:57.716301   77195 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:57.716844   77195 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:08:57.733554   77195 ssh_runner.go:195] Run: systemctl --version
I1210 06:08:57.733601   77195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:08:57.751738   77195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
I1210 06:08:57.853561   77195 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644034 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                  │ v1.35.0-rc.1      │ sha256:7e3ace │ 22.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-rc.1      │ sha256:abca4d │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.1               │ sha256:8057e0 │ 262kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ sha256:667491 │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.6-0           │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-rc.1      │ sha256:a34b34 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ sha256:d7b100 │ 265kB  │
│ registry.k8s.io/pause                       │ 3.3               │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest            │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-644034 │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/minikube-local-cache-test │ functional-644034 │ sha256:187b8b │ 992B   │
│ localhost/my-image                          │ functional-644034 │ sha256:105c58 │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-rc.1      │ sha256:3c6ba2 │ 24.7MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644034 image ls --format table --alsologtostderr:
I1210 06:09:02.183617   77593 out.go:360] Setting OutFile to fd 1 ...
I1210 06:09:02.183738   77593 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:09:02.183765   77593 out.go:374] Setting ErrFile to fd 2...
I1210 06:09:02.183771   77593 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:09:02.184028   77593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:09:02.184655   77593 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:09:02.184778   77593 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:09:02.185347   77593 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:09:02.203713   77593 ssh_runner.go:195] Run: systemctl --version
I1210 06:09:02.203782   77593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:09:02.222437   77593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
I1210 06:09:02.330029   77593 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644034 image ls --format json --alsologtostderr:
[{"id":"sha256:187b8b0a3596efc82d8108da07255f790e24f4da482c7a2aa9f3e56dbd5d3e50","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-644034"],"size":"992"},{"id":"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8032639"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21166088"},{"id":"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"24690149"},{"id":"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"20670083"},{"id":"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"22430795"},{"id":"sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"15403461"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"265458"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-644034"],"size":"2173567"},{"id":"sha256:105c584c08623efaed11abb744866aab83b40c7c1531df4183e9b5ca9d16d699","repoDigests":[],"repoTags":["localhost/my-image:functional-644034"],"size":"830618"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21748497"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644034 image ls --format json --alsologtostderr:
I1210 06:09:01.946476   77550 out.go:360] Setting OutFile to fd 1 ...
I1210 06:09:01.946715   77550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:09:01.946766   77550 out.go:374] Setting ErrFile to fd 2...
I1210 06:09:01.946787   77550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:09:01.947144   77550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:09:01.947804   77550 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:09:01.947983   77550 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:09:01.948561   77550 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:09:01.968980   77550 ssh_runner.go:195] Run: systemctl --version
I1210 06:09:01.969033   77550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:09:01.989651   77550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
I1210 06:09:02.093807   77550 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)
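
The JSON stdout above is a flat array of image records. A minimal Go sketch for consuming it, with a struct limited to the fields actually visible in this run (id, repoDigests, repoTags, size; note that size is emitted as a decimal string, not a number):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields emitted by "image ls --format json" above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-644034",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}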

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-644034 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-644034
size: "2173567"
- id: sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8032639"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21166088"
- id: sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "20670083"
- id: sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "22430795"
- id: sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "15403461"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:187b8b0a3596efc82d8108da07255f790e24f4da482c7a2aa9f3e56dbd5d3e50
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-644034
size: "992"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21748497"
- id: sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "24690149"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "265458"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644034 image ls --format yaml --alsologtostderr:
I1210 06:08:57.931768   77232 out.go:360] Setting OutFile to fd 1 ...
I1210 06:08:57.931899   77232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:57.931908   77232 out.go:374] Setting ErrFile to fd 2...
I1210 06:08:57.931914   77232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:57.932182   77232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:08:57.932787   77232 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:57.932910   77232 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:57.933472   77232 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:08:57.954009   77232 ssh_runner.go:195] Run: systemctl --version
I1210 06:08:57.954061   77232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:08:57.972034   77232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
I1210 06:08:58.077772   77232 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-644034 ssh pgrep buildkitd: exit status 1 (291.091314ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image build -t localhost/my-image:functional-644034 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-644034 image build -t localhost/my-image:functional-644034 testdata/build --alsologtostderr: (3.254121671s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-644034 image build -t localhost/my-image:functional-644034 testdata/build --alsologtostderr:
I1210 06:08:58.449542   77337 out.go:360] Setting OutFile to fd 1 ...
I1210 06:08:58.449710   77337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:58.449740   77337 out.go:374] Setting ErrFile to fd 2...
I1210 06:08:58.449760   77337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:08:58.450042   77337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
I1210 06:08:58.450735   77337 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:58.451407   77337 config.go:182] Loaded profile config "functional-644034": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1210 06:08:58.452002   77337 cli_runner.go:164] Run: docker container inspect functional-644034 --format={{.State.Status}}
I1210 06:08:58.469333   77337 ssh_runner.go:195] Run: systemctl --version
I1210 06:08:58.469384   77337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644034
I1210 06:08:58.487316   77337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/functional-644034/id_rsa Username:docker}
I1210 06:08:58.589401   77337 build_images.go:162] Building image from path: /tmp/build.477607041.tar
I1210 06:08:58.589483   77337 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 06:08:58.596791   77337 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.477607041.tar
I1210 06:08:58.600230   77337 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.477607041.tar: stat -c "%s %y" /var/lib/minikube/build/build.477607041.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.477607041.tar': No such file or directory
I1210 06:08:58.600265   77337 ssh_runner.go:362] scp /tmp/build.477607041.tar --> /var/lib/minikube/build/build.477607041.tar (3072 bytes)
I1210 06:08:58.616787   77337 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.477607041
I1210 06:08:58.624046   77337 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.477607041 -xf /var/lib/minikube/build/build.477607041.tar
I1210 06:08:58.631817   77337 containerd.go:394] Building image: /var/lib/minikube/build/build.477607041
I1210 06:08:58.631890   77337 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.477607041 --local dockerfile=/var/lib/minikube/build/build.477607041 --output type=image,name=localhost/my-image:functional-644034
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:7e6b6f3c1bb7bb5391af935b79345dd0bed30cbe9773e092ce18420ec1b2f3c3
#8 exporting manifest sha256:7e6b6f3c1bb7bb5391af935b79345dd0bed30cbe9773e092ce18420ec1b2f3c3 0.0s done
#8 exporting config sha256:105c584c08623efaed11abb744866aab83b40c7c1531df4183e9b5ca9d16d699 0.0s done
#8 naming to localhost/my-image:functional-644034 done
#8 DONE 0.2s
I1210 06:09:01.636045   77337 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.477607041 --local dockerfile=/var/lib/minikube/build/build.477607041 --output type=image,name=localhost/my-image:functional-644034: (3.004115962s)
I1210 06:09:01.636119   77337 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.477607041
I1210 06:09:01.644153   77337 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.477607041.tar
I1210 06:09:01.651818   77337 build_images.go:218] Built localhost/my-image:functional-644034 from /tmp/build.477607041.tar
I1210 06:09:01.651848   77337 build_images.go:134] succeeded building to: functional-644034
I1210 06:09:01.651854   77337 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.77s)
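
The build flow in the log is: tar the context on the host, scp it into the node, unpack it under /var/lib/minikube/build, then hand both context and dockerfile to buildctl. A minimal Go sketch of the in-node steps driven over "minikube ssh"; guestRun is a hypothetical stand-in for the ssh_runner calls above, and the paths simply reuse this run's temporary names:

package main

import (
	"fmt"
	"os/exec"
)

const profile = "functional-644034"

// guestRun executes a command inside the node via "minikube ssh".
func guestRun(command string) error {
	return exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", command).Run()
}

func main() {
	dir := "/var/lib/minikube/build/build.477607041" // unpacked context, as above
	tar := dir + ".tar"                              // context tar, already copied into the node
	for _, command := range []string{
		fmt.Sprintf("sudo mkdir -p %s", dir),
		fmt.Sprintf("sudo tar -C %s -xf %s", dir, tar),
		fmt.Sprintf("sudo buildctl build --frontend dockerfile.v0"+
			" --local context=%s --local dockerfile=%s"+
			" --output type=image,name=localhost/my-image:%s", dir, dir, profile),
	} {
		if err := guestRun(command); err != nil {
			panic(err)
		}
	}
}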

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-644034
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image load --daemon kicbase/echo-server:functional-644034 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image load --daemon kicbase/echo-server:functional-644034 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-644034
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image load --daemon kicbase/echo-server:functional-644034 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image save kicbase/echo-server:functional-644034 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image rm kicbase/echo-server:functional-644034 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.67s)
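
ImageSaveToFile and ImageLoadFromFile above form a round trip: save the tagged image out of the cluster runtime to a tar, load it back, and list images to confirm. A minimal Go sketch of that round trip, reusing the exact commands and tar path from this run:

package main

import "os/exec"

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar"
	// Save the image out of the cluster runtime, as in ImageSaveToFile.
	if err := run("-p", "functional-644034", "image", "save",
		"kicbase/echo-server:functional-644034", tar); err != nil {
		panic(err)
	}
	// Load it back, as in ImageLoadFromFile.
	if err := run("-p", "functional-644034", "image", "load", tar); err != nil {
		panic(err)
	}
	// List images to confirm the tag is present again.
	if err := run("-p", "functional-644034", "image", "ls"); err != nil {
		panic(err)
	}
}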

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-644034
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 image save --daemon kicbase/echo-server:functional-644034 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-644034
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-644034 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-644034
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-644034
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-644034
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (161.12s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1210 06:11:38.876233    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:38.884601    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:38.895898    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:38.917244    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:38.958555    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:39.039909    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:39.201364    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:39.523126    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:40.164675    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:41.445982    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:44.007291    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:44.571504    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:49.129541    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:11:59.371834    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:12:19.853186    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:13:00.816003    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m40.187504185s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (161.12s)

TestMultiControlPlane/serial/DeployApp (7.81s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 kubectl -- rollout status deployment/busybox: (4.872817807s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-484vg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-cb5xt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-t9b5c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-484vg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-cb5xt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-t9b5c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-484vg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-cb5xt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-t9b5c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.81s)

TestMultiControlPlane/serial/PingHostFromPods (1.58s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-484vg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-484vg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-cb5xt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-cb5xt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-t9b5c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 kubectl -- exec busybox-7b57f96db7-t9b5c -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)
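
Each check above resolves host.minikube.internal inside a pod, slices the IP out of busybox nslookup's output (awk 'NR==5' | cut -d' ' -f3), and pings it once. A minimal Go sketch of the same probe via kubectl, using this run's context and one of its pod names; the pipeline is copied from the log and depends on busybox nslookup's output layout:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-484vg" // one of the pods from this run
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-753173",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // e.g. 192.168.49.1
	// One ping from the same pod proves the host gateway is reachable.
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command("kubectl", "--context", "ha-753173",
		"exec", pod, "--", "sh", "-c", ping).Run(); err != nil {
		panic(err)
	}
}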

TestMultiControlPlane/serial/AddWorkerNode (32.77s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 node add --alsologtostderr -v 5
E1210 06:13:37.012070    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 node add --alsologtostderr -v 5: (31.679619242s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5: (1.085260348s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.77s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-753173 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.118046298s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

TestMultiControlPlane/serial/CopyFile (20.22s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 status --output json --alsologtostderr -v 5: (1.076210587s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp testdata/cp-test.txt ha-753173:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile377593162/001/cp-test_ha-753173.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173:/home/docker/cp-test.txt ha-753173-m02:/home/docker/cp-test_ha-753173_ha-753173-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test_ha-753173_ha-753173-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173:/home/docker/cp-test.txt ha-753173-m03:/home/docker/cp-test_ha-753173_ha-753173-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test_ha-753173_ha-753173-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173:/home/docker/cp-test.txt ha-753173-m04:/home/docker/cp-test_ha-753173_ha-753173-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test_ha-753173_ha-753173-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp testdata/cp-test.txt ha-753173-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile377593162/001/cp-test_ha-753173-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m02:/home/docker/cp-test.txt ha-753173:/home/docker/cp-test_ha-753173-m02_ha-753173.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test_ha-753173-m02_ha-753173.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m02:/home/docker/cp-test.txt ha-753173-m03:/home/docker/cp-test_ha-753173-m02_ha-753173-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test_ha-753173-m02_ha-753173-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m02:/home/docker/cp-test.txt ha-753173-m04:/home/docker/cp-test_ha-753173-m02_ha-753173-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test_ha-753173-m02_ha-753173-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp testdata/cp-test.txt ha-753173-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile377593162/001/cp-test_ha-753173-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m03:/home/docker/cp-test.txt ha-753173:/home/docker/cp-test_ha-753173-m03_ha-753173.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test_ha-753173-m03_ha-753173.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m03:/home/docker/cp-test.txt ha-753173-m02:/home/docker/cp-test_ha-753173-m03_ha-753173-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test_ha-753173-m03_ha-753173-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m03:/home/docker/cp-test.txt ha-753173-m04:/home/docker/cp-test_ha-753173-m03_ha-753173-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test_ha-753173-m03_ha-753173-m04.txt"
E1210 06:14:22.738137    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp testdata/cp-test.txt ha-753173-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile377593162/001/cp-test_ha-753173-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m04:/home/docker/cp-test.txt ha-753173:/home/docker/cp-test_ha-753173-m04_ha-753173.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173 "sudo cat /home/docker/cp-test_ha-753173-m04_ha-753173.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m04:/home/docker/cp-test.txt ha-753173-m02:/home/docker/cp-test_ha-753173-m04_ha-753173-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m02 "sudo cat /home/docker/cp-test_ha-753173-m04_ha-753173-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 cp ha-753173-m04:/home/docker/cp-test.txt ha-753173-m03:/home/docker/cp-test_ha-753173-m04_ha-753173-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 ssh -n ha-753173-m03 "sudo cat /home/docker/cp-test_ha-753173-m04_ha-753173-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.22s)
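Note: the CopyFile log above is a full copy matrix: testdata/cp-test.txt is pushed onto each node, then copied from every node to every other node, and each destination file is read back over SSH. A minimal Go sketch of the same enumeration, with node names taken from the log (the program only prints the planned copies; it is illustrative, not minikube code):

package main

import "fmt"

func main() {
    // The four cluster members exercised by the test above.
    nodes := []string{"ha-753173", "ha-753173-m02", "ha-753173-m03", "ha-753173-m04"}
    for _, src := range nodes {
        // First leg: push the local test file onto the source node.
        fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
        // Second leg: fan the file out to every other node, using the
        // cp-test_<src>_<dst>.txt naming scheme seen in the log.
        for _, dst := range nodes {
            if dst == src {
                continue
            }
            fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n", src, dst, src, dst)
        }
    }
}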

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 node stop m02 --alsologtostderr -v 5: (12.092167657s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5: exit status 7 (804.008928ms)

-- stdout --
	ha-753173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-753173-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753173-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-753173-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1210 06:14:39.954212   96025 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:39.954387   96025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:39.954418   96025 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:39.954438   96025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:39.954708   96025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:14:39.954909   96025 out.go:368] Setting JSON to false
	I1210 06:14:39.954971   96025 mustload.go:66] Loading cluster: ha-753173
	I1210 06:14:39.955063   96025 notify.go:221] Checking for updates...
	I1210 06:14:39.959311   96025 config.go:182] Loaded profile config "ha-753173": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 06:14:39.959347   96025 status.go:174] checking status of ha-753173 ...
	I1210 06:14:39.960570   96025 cli_runner.go:164] Run: docker container inspect ha-753173 --format={{.State.Status}}
	I1210 06:14:39.982972   96025 status.go:371] ha-753173 host status = "Running" (err=<nil>)
	I1210 06:14:39.982992   96025 host.go:66] Checking if "ha-753173" exists ...
	I1210 06:14:39.983378   96025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-753173
	I1210 06:14:40.018409   96025 host.go:66] Checking if "ha-753173" exists ...
	I1210 06:14:40.018744   96025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:14:40.018804   96025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-753173
	I1210 06:14:40.041148   96025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/ha-753173/id_rsa Username:docker}
	I1210 06:14:40.149153   96025 ssh_runner.go:195] Run: systemctl --version
	I1210 06:14:40.156199   96025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:40.171693   96025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:14:40.232077   96025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-10 06:14:40.222521903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:14:40.232651   96025 kubeconfig.go:125] found "ha-753173" server: "https://192.168.49.254:8443"
	I1210 06:14:40.232693   96025 api_server.go:166] Checking apiserver status ...
	I1210 06:14:40.232739   96025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:14:40.246434   96025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2083/cgroup
	I1210 06:14:40.254902   96025 api_server.go:182] apiserver freezer: "12:freezer:/docker/de34178198ae8f50f6f53e7b595cf34be9216b6651777615f2e0e16c0ce293db/kubepods/burstable/pod3479e6cdcf30b475a7e17efe9527388e/6fa2a3968e77b4933396fcc5d6f2abe559fa2a281706ed2543efb31bdea822a8"
	I1210 06:14:40.254984   96025 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/de34178198ae8f50f6f53e7b595cf34be9216b6651777615f2e0e16c0ce293db/kubepods/burstable/pod3479e6cdcf30b475a7e17efe9527388e/6fa2a3968e77b4933396fcc5d6f2abe559fa2a281706ed2543efb31bdea822a8/freezer.state
	I1210 06:14:40.263344   96025 api_server.go:204] freezer state: "THAWED"
	I1210 06:14:40.263383   96025 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 06:14:40.271623   96025 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 06:14:40.271652   96025 status.go:463] ha-753173 apiserver status = Running (err=<nil>)
	I1210 06:14:40.271663   96025 status.go:176] ha-753173 status: &{Name:ha-753173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:14:40.271679   96025 status.go:174] checking status of ha-753173-m02 ...
	I1210 06:14:40.271984   96025 cli_runner.go:164] Run: docker container inspect ha-753173-m02 --format={{.State.Status}}
	I1210 06:14:40.289387   96025 status.go:371] ha-753173-m02 host status = "Stopped" (err=<nil>)
	I1210 06:14:40.289411   96025 status.go:384] host is not running, skipping remaining checks
	I1210 06:14:40.289417   96025 status.go:176] ha-753173-m02 status: &{Name:ha-753173-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:14:40.289437   96025 status.go:174] checking status of ha-753173-m03 ...
	I1210 06:14:40.289734   96025 cli_runner.go:164] Run: docker container inspect ha-753173-m03 --format={{.State.Status}}
	I1210 06:14:40.308760   96025 status.go:371] ha-753173-m03 host status = "Running" (err=<nil>)
	I1210 06:14:40.308792   96025 host.go:66] Checking if "ha-753173-m03" exists ...
	I1210 06:14:40.309103   96025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-753173-m03
	I1210 06:14:40.327717   96025 host.go:66] Checking if "ha-753173-m03" exists ...
	I1210 06:14:40.328022   96025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:14:40.328070   96025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-753173-m03
	I1210 06:14:40.345661   96025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/ha-753173-m03/id_rsa Username:docker}
	I1210 06:14:40.448298   96025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:40.462364   96025 kubeconfig.go:125] found "ha-753173" server: "https://192.168.49.254:8443"
	I1210 06:14:40.462394   96025 api_server.go:166] Checking apiserver status ...
	I1210 06:14:40.462435   96025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:14:40.474650   96025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1660/cgroup
	I1210 06:14:40.484135   96025 api_server.go:182] apiserver freezer: "12:freezer:/docker/214bfdcd3b3431152824bfe03fc860c33fa94a9f2aa216aa1c070ff10ee6702a/kubepods/burstable/pode57c66f0a6ec3eb0ae3101c4cd78cf71/ccda0d7dd4e8414c46eacaca315bcfb6f823ffcec37d6a1b20d6f2a98c3579fe"
	I1210 06:14:40.484247   96025 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/214bfdcd3b3431152824bfe03fc860c33fa94a9f2aa216aa1c070ff10ee6702a/kubepods/burstable/pode57c66f0a6ec3eb0ae3101c4cd78cf71/ccda0d7dd4e8414c46eacaca315bcfb6f823ffcec37d6a1b20d6f2a98c3579fe/freezer.state
	I1210 06:14:40.492765   96025 api_server.go:204] freezer state: "THAWED"
	I1210 06:14:40.492838   96025 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 06:14:40.501041   96025 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 06:14:40.501069   96025 status.go:463] ha-753173-m03 apiserver status = Running (err=<nil>)
	I1210 06:14:40.501079   96025 status.go:176] ha-753173-m03 status: &{Name:ha-753173-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:14:40.501119   96025 status.go:174] checking status of ha-753173-m04 ...
	I1210 06:14:40.501435   96025 cli_runner.go:164] Run: docker container inspect ha-753173-m04 --format={{.State.Status}}
	I1210 06:14:40.518952   96025 status.go:371] ha-753173-m04 host status = "Running" (err=<nil>)
	I1210 06:14:40.518975   96025 host.go:66] Checking if "ha-753173-m04" exists ...
	I1210 06:14:40.519343   96025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-753173-m04
	I1210 06:14:40.537727   96025 host.go:66] Checking if "ha-753173-m04" exists ...
	I1210 06:14:40.538060   96025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:14:40.538107   96025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-753173-m04
	I1210 06:14:40.555703   96025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/ha-753173-m04/id_rsa Username:docker}
	I1210 06:14:40.660165   96025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:14:40.684741   96025 status.go:176] ha-753173-m04 status: &{Name:ha-753173-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
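Note: the stderr trace above shows how `status` decides apiserver health in three steps: pgrep for the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz over HTTPS and expect 200 with body "ok". A minimal Go sketch of just the final probe, assuming a reachable endpoint and the test cluster's self-signed certificates (the function name and TLS handling are ours, not minikube's code):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// apiserverHealthy performs the GET /healthz check seen in the log and
// treats HTTP 200 with body "ok" as healthy.
func apiserverHealthy(endpoint string) (bool, error) {
    client := &http.Client{
        Timeout: 5 * time.Second,
        // The cluster uses its own CA; skipping verification here is an
        // assumption made for this sketch only.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    resp, err := client.Get(endpoint + "/healthz")
    if err != nil {
        return false, err
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return false, err
    }
    return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
    ok, err := apiserverHealthy("https://192.168.49.254:8443")
    fmt.Println(ok, err)
}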

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (15.21s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 node start m02 --alsologtostderr -v 5: (13.240185558s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5: (1.865840798s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.21s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.315223101s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.43s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 stop --alsologtostderr -v 5: (37.653866519s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 start --wait true --alsologtostderr -v 5: (1m0.599260053s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.43s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.7s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 node delete m03 --alsologtostderr -v 5
E1210 06:16:38.876142    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:40.085464    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:16:44.571616    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 node delete m03 --alsologtostderr -v 5: (9.743165346s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.70s)
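Note: the go-template passed to kubectl above walks every node's status.conditions and prints the status of the "Ready" condition, one node per line. A self-contained Go sketch evaluating the same template string against hypothetical data (the struct types below stand in for the real NodeList that kubectl feeds it):

package main

import (
    "os"
    "text/template"
)

// Minimal stand-ins for the Kubernetes NodeList shape the template reads.
type condition struct{ Type, Status string }
type nodeStatus struct{ Conditions []condition }
type node struct{ Status nodeStatus }
type nodeList struct{ Items []node }

func main() {
    // Exactly the template string from the test invocation above.
    const src = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
    tmpl := template.Must(template.New("ready").Parse(src))
    // Hypothetical three-node cluster, all Ready.
    list := nodeList{Items: []node{
        {Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
        {Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
        {Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
    }}
    if err := tmpl.Execute(os.Stdout, list); err != nil {
        panic(err)
    }
}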

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.71s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 stop --alsologtostderr -v 5
E1210 06:17:06.581071    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 stop --alsologtostderr -v 5: (36.593587182s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5: exit status 7 (112.14229ms)

-- stdout --
	ha-753173
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753173-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753173-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1210 06:17:24.684296  110997 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:17:24.684481  110997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:17:24.684507  110997 out.go:374] Setting ErrFile to fd 2...
	I1210 06:17:24.684526  110997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:17:24.684837  110997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:17:24.685076  110997 out.go:368] Setting JSON to false
	I1210 06:17:24.685136  110997 mustload.go:66] Loading cluster: ha-753173
	I1210 06:17:24.685232  110997 notify.go:221] Checking for updates...
	I1210 06:17:24.685642  110997 config.go:182] Loaded profile config "ha-753173": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 06:17:24.685683  110997 status.go:174] checking status of ha-753173 ...
	I1210 06:17:24.686523  110997 cli_runner.go:164] Run: docker container inspect ha-753173 --format={{.State.Status}}
	I1210 06:17:24.704667  110997 status.go:371] ha-753173 host status = "Stopped" (err=<nil>)
	I1210 06:17:24.704737  110997 status.go:384] host is not running, skipping remaining checks
	I1210 06:17:24.704758  110997 status.go:176] ha-753173 status: &{Name:ha-753173 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:17:24.704823  110997 status.go:174] checking status of ha-753173-m02 ...
	I1210 06:17:24.705210  110997 cli_runner.go:164] Run: docker container inspect ha-753173-m02 --format={{.State.Status}}
	I1210 06:17:24.728642  110997 status.go:371] ha-753173-m02 host status = "Stopped" (err=<nil>)
	I1210 06:17:24.728673  110997 status.go:384] host is not running, skipping remaining checks
	I1210 06:17:24.728681  110997 status.go:176] ha-753173-m02 status: &{Name:ha-753173-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:17:24.728699  110997 status.go:174] checking status of ha-753173-m04 ...
	I1210 06:17:24.728992  110997 cli_runner.go:164] Run: docker container inspect ha-753173-m04 --format={{.State.Status}}
	I1210 06:17:24.749378  110997 status.go:371] ha-753173-m04 host status = "Stopped" (err=<nil>)
	I1210 06:17:24.749398  110997 status.go:384] host is not running, skipping remaining checks
	I1210 06:17:24.749405  110997 status.go:176] ha-753173-m04 status: &{Name:ha-753173-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (62.59s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m1.591923583s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (62.59s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (61.92s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 node add --control-plane --alsologtostderr -v 5
E1210 06:18:37.012249    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 node add --control-plane --alsologtostderr -v 5: (1m0.779513595s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-753173 status --alsologtostderr -v 5: (1.143051943s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (61.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.070259335s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                    
TestJSONOutput/start/Command (58.56s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-810267 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-810267 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (58.551043942s)
--- PASS: TestJSONOutput/start/Command (58.56s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-810267 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-810267 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-810267 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-810267 --output=json --user=testUser: (6.012822102s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-377566 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-377566 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.608399ms)

-- stdout --
	{"specversion":"1.0","id":"bea9dbdf-5ad0-49e3-a04e-458a0b8cc108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-377566] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d2ad390-0bc6-4dc4-83da-53df1ff2fe19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"b6a68219-9cb4-4bc3-a903-319a795956ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4eca31e-f69d-4a7b-b04c-68d48c1dfe0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig"}}
	{"specversion":"1.0","id":"e20638f0-31f8-44b4-bf5e-4dea903352dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube"}}
	{"specversion":"1.0","id":"47d3fe45-9c24-402d-b29a-4656dc2a42c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1e7254c4-0340-4843-9f6e-4625fa40ab21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b6b5749a-2012-4888-9c6e-6d0efde4fa35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-377566" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-377566
--- PASS: TestErrorJSONOutput (0.24s)
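Note: each line of the --output=json stream above is a CloudEvents-style envelope; the final io.k8s.sigs.minikube.error event carries the exit code and error name the test asserts on. A minimal Go sketch decoding one such line (the struct is ours and covers only the fields visible in the log):

package main

import (
    "encoding/json"
    "fmt"
)

// event models just the envelope fields shown in the log lines above.
type event struct {
    SpecVersion string            `json:"specversion"`
    Type        string            `json:"type"`
    Data        map[string]string `json:"data"`
}

func main() {
    line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
    var e event
    if err := json.Unmarshal([]byte(line), &e); err != nil {
        panic(err)
    }
    // Prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64 (exit 56)
    fmt.Printf("%s: %s (exit %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
}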

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.37s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-853027 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-853027 --network=: (38.01915301s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-853027" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-853027
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-853027: (2.317330665s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.37s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-904179 --network=bridge
E1210 06:21:38.876241    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:21:44.571573    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-904179 --network=bridge: (35.628100648s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-904179" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-904179
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-904179: (2.126051474s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.79s)

                                                
                                    
TestKicExistingNetwork (43.55s)

=== RUN   TestKicExistingNetwork
I1210 06:22:09.510844    4116 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 06:22:09.526532    4116 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 06:22:09.526613    4116 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 06:22:09.526634    4116 cli_runner.go:164] Run: docker network inspect existing-network
W1210 06:22:09.543686    4116 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 06:22:09.543715    4116 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1210 06:22:09.543734    4116 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1210 06:22:09.543832    4116 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:22:09.560641    4116 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4d091f932c27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:e7:11:3f:d3:8a} reservation:<nil>}
I1210 06:22:09.560898    4116 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017c5660}
I1210 06:22:09.560918    4116 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 06:22:09.560964    4116 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 06:22:09.624683    4116 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-712307 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-712307 --network=existing-network: (41.230008605s)
helpers_test.go:176: Cleaning up "existing-network-712307" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-712307
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-712307: (2.16875974s)
I1210 06:22:53.040107    4116 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (43.55s)
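Note: the network_create trace above shows the free-subnet scan: 192.168.49.0/24 is skipped because the default minikube bridge already holds it, and 192.168.58.0/24 is taken instead. A sketch of that scan under the assumption, suggested by the 49 -> 58 jump in the log, that candidate third octets advance in steps of 9; the function is illustrative, not minikube's code:

package main

import "fmt"

// firstFreeSubnet returns the first candidate 192.168.x.0/24 block that is
// not already claimed by an existing bridge network.
func firstFreeSubnet(taken map[string]bool) string {
    for third := 49; third < 255; third += 9 { // assumed step of 9
        cidr := fmt.Sprintf("192.168.%d.0/24", third)
        if !taken[cidr] {
            return cidr
        }
    }
    return ""
}

func main() {
    // The default minikube network from the log occupies the first slot.
    taken := map[string]bool{"192.168.49.0/24": true}
    fmt.Println(firstFreeSubnet(taken)) // 192.168.58.0/24
}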

                                                
                                    
TestKicCustomSubnet (40.04s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-441742 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-441742 --subnet=192.168.60.0/24: (37.693931854s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-441742 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-441742" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-441742
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-441742: (2.31646147s)
--- PASS: TestKicCustomSubnet (40.04s)

                                                
                                    
TestKicStaticIP (41.86s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-528304 --static-ip=192.168.200.200
E1210 06:23:37.012796    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-528304 --static-ip=192.168.200.200: (39.516536475s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-528304 ip
helpers_test.go:176: Cleaning up "static-ip-528304" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-528304
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-528304: (2.185356803s)
--- PASS: TestKicStaticIP (41.86s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (88.19s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-615122 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-615122 --driver=docker  --container-runtime=containerd: (40.189949607s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-618248 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-618248 --driver=docker  --container-runtime=containerd: (42.034663181s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-615122
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-618248
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-618248" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-618248
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-618248: (2.168417468s)
helpers_test.go:176: Cleaning up "first-615122" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-615122
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-615122: (2.380090612s)
--- PASS: TestMinikubeProfile (88.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.42s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-897274 --memory=3072 --mount-string /tmp/TestMountStartserial2160956374/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-897274 --memory=3072 --mount-string /tmp/TestMountStartserial2160956374/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.424064566s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.42s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-897274 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.46s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-899342 --memory=3072 --mount-string /tmp/TestMountStartserial2160956374/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-899342 --memory=3072 --mount-string /tmp/TestMountStartserial2160956374/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.455011685s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-899342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-897274 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-897274 --alsologtostderr -v=5: (1.723460727s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-899342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-899342
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-899342: (1.289339967s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.45s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-899342
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-899342: (6.45163193s)
--- PASS: TestMountStart/serial/RestartStopped (7.45s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-899342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (86.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472894 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1210 06:26:27.642635    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:26:38.876060    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:26:44.571393    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472894 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m26.047547085s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.61s)
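
For reference, the start invocation reduces to a two-node docker-driver cluster followed by a status check. A sketch with a placeholder profile name:

    minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=containerd
    minikube -p multinode-demo status   # control plane and worker should both report Running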

TestMultiNode/serial/DeployApp2Nodes (5.55s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-472894 -- rollout status deployment/busybox: (3.541262051s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-nbdtg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-w6xmt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-nbdtg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-w6xmt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-nbdtg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-w6xmt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.55s)
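
The check reduces to rolling out a two-replica busybox Deployment and resolving cluster DNS from every pod. A condensed sketch (manifest path and pod name are placeholders for the testdata equivalents):

    minikube kubectl -p multinode-demo -- apply -f multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    # repeat for each pod name returned above
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local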

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-nbdtg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-nbdtg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-w6xmt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472894 -- exec busybox-7b57f96db7-w6xmt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
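
Host reachability is verified by resolving host.minikube.internal inside each pod and pinging the returned address. A sketch (pod name is a placeholder):

    HOST_IP=$(minikube kubectl -p multinode-demo -- exec <busybox-pod> -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- sh -c "ping -c 1 $HOST_IP"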

TestMultiNode/serial/AddNode (29.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-472894 -v=5 --alsologtostderr
E1210 06:28:01.942806    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-472894 -v=5 --alsologtostderr: (28.527181038s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.25s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-472894 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.52s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp testdata/cp-test.txt multinode-472894:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1761430123/001/cp-test_multinode-472894.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894:/home/docker/cp-test.txt multinode-472894-m02:/home/docker/cp-test_multinode-472894_multinode-472894-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m02 "sudo cat /home/docker/cp-test_multinode-472894_multinode-472894-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894:/home/docker/cp-test.txt multinode-472894-m03:/home/docker/cp-test_multinode-472894_multinode-472894-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m03 "sudo cat /home/docker/cp-test_multinode-472894_multinode-472894-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp testdata/cp-test.txt multinode-472894-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1761430123/001/cp-test_multinode-472894-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894-m02:/home/docker/cp-test.txt multinode-472894:/home/docker/cp-test_multinode-472894-m02_multinode-472894.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894 "sudo cat /home/docker/cp-test_multinode-472894-m02_multinode-472894.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894-m02:/home/docker/cp-test.txt multinode-472894-m03:/home/docker/cp-test_multinode-472894-m02_multinode-472894-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m03 "sudo cat /home/docker/cp-test_multinode-472894-m02_multinode-472894-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp testdata/cp-test.txt multinode-472894-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1761430123/001/cp-test_multinode-472894-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894-m03:/home/docker/cp-test.txt multinode-472894:/home/docker/cp-test_multinode-472894-m03_multinode-472894.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894 "sudo cat /home/docker/cp-test_multinode-472894-m03_multinode-472894.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 cp multinode-472894-m03:/home/docker/cp-test.txt multinode-472894-m02:/home/docker/cp-test_multinode-472894-m03_multinode-472894-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 ssh -n multinode-472894-m02 "sudo cat /home/docker/cp-test_multinode-472894-m03_multinode-472894-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.52s)
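
Every hop above is a minikube cp in one of three shapes (host-to-node, node-to-host, node-to-node), each verified with ssh -n. A sketch with placeholder names and paths:

    minikube -p multinode-demo cp ./cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt      # host -> node
    minikube -p multinode-demo cp multinode-demo-m02:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
    minikube -p multinode-demo cp multinode-demo-m02:/home/docker/cp-test.txt \
      multinode-demo-m03:/home/docker/cp-test.txt                                                # node -> node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"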

TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-472894 node stop m03: (1.310283942s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472894 status: exit status 7 (539.880956ms)
-- stdout --
	multinode-472894
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-472894-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-472894-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr: exit status 7 (552.543808ms)
-- stdout --
	multinode-472894
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-472894-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-472894-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 06:28:29.227331  169507 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:28:29.227451  169507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:29.227462  169507 out.go:374] Setting ErrFile to fd 2...
	I1210 06:28:29.227468  169507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:28:29.227733  169507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:28:29.228015  169507 out.go:368] Setting JSON to false
	I1210 06:28:29.228055  169507 mustload.go:66] Loading cluster: multinode-472894
	I1210 06:28:29.228107  169507 notify.go:221] Checking for updates...
	I1210 06:28:29.229182  169507 config.go:182] Loaded profile config "multinode-472894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 06:28:29.229224  169507 status.go:174] checking status of multinode-472894 ...
	I1210 06:28:29.229918  169507 cli_runner.go:164] Run: docker container inspect multinode-472894 --format={{.State.Status}}
	I1210 06:28:29.250476  169507 status.go:371] multinode-472894 host status = "Running" (err=<nil>)
	I1210 06:28:29.250503  169507 host.go:66] Checking if "multinode-472894" exists ...
	I1210 06:28:29.250806  169507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-472894
	I1210 06:28:29.279148  169507 host.go:66] Checking if "multinode-472894" exists ...
	I1210 06:28:29.279559  169507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:28:29.279636  169507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472894
	I1210 06:28:29.299048  169507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/multinode-472894/id_rsa Username:docker}
	I1210 06:28:29.404159  169507 ssh_runner.go:195] Run: systemctl --version
	I1210 06:28:29.410489  169507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:28:29.423545  169507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:28:29.488122  169507 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 06:28:29.478465133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:28:29.488756  169507 kubeconfig.go:125] found "multinode-472894" server: "https://192.168.67.2:8443"
	I1210 06:28:29.488785  169507 api_server.go:166] Checking apiserver status ...
	I1210 06:28:29.488832  169507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:28:29.501646  169507 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2067/cgroup
	I1210 06:28:29.509647  169507 api_server.go:182] apiserver freezer: "12:freezer:/docker/bc6024b5fac1c132ea19c96795d7ee45b96861eafebf593bdfea19f72aeb868d/kubepods/burstable/pod706f812fde50a4c5f904d3dc9ae4a832/04efe3a30a9470d77fc3faa19e9b35cc87d4d0f8fa9d59a78572f15654e4f12d"
	I1210 06:28:29.509739  169507 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bc6024b5fac1c132ea19c96795d7ee45b96861eafebf593bdfea19f72aeb868d/kubepods/burstable/pod706f812fde50a4c5f904d3dc9ae4a832/04efe3a30a9470d77fc3faa19e9b35cc87d4d0f8fa9d59a78572f15654e4f12d/freezer.state
	I1210 06:28:29.517321  169507 api_server.go:204] freezer state: "THAWED"
	I1210 06:28:29.517349  169507 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 06:28:29.525381  169507 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 06:28:29.525409  169507 status.go:463] multinode-472894 apiserver status = Running (err=<nil>)
	I1210 06:28:29.525419  169507 status.go:176] multinode-472894 status: &{Name:multinode-472894 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:28:29.525436  169507 status.go:174] checking status of multinode-472894-m02 ...
	I1210 06:28:29.525773  169507 cli_runner.go:164] Run: docker container inspect multinode-472894-m02 --format={{.State.Status}}
	I1210 06:28:29.543289  169507 status.go:371] multinode-472894-m02 host status = "Running" (err=<nil>)
	I1210 06:28:29.543318  169507 host.go:66] Checking if "multinode-472894-m02" exists ...
	I1210 06:28:29.543641  169507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-472894-m02
	I1210 06:28:29.560347  169507 host.go:66] Checking if "multinode-472894-m02" exists ...
	I1210 06:28:29.560674  169507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:28:29.560729  169507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472894-m02
	I1210 06:28:29.584277  169507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22094-2307/.minikube/machines/multinode-472894-m02/id_rsa Username:docker}
	I1210 06:28:29.688205  169507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:28:29.701066  169507 status.go:176] multinode-472894-m02 status: &{Name:multinode-472894-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:28:29.701100  169507 status.go:174] checking status of multinode-472894-m03 ...
	I1210 06:28:29.701398  169507 cli_runner.go:164] Run: docker container inspect multinode-472894-m03 --format={{.State.Status}}
	I1210 06:28:29.721161  169507 status.go:371] multinode-472894-m03 host status = "Stopped" (err=<nil>)
	I1210 06:28:29.721186  169507 status.go:384] host is not running, skipping remaining checks
	I1210 06:28:29.721194  169507 status.go:176] multinode-472894-m03 status: &{Name:multinode-472894-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
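
Note the exit-code convention exercised here: once any node's host is stopped, minikube status exits non-zero (7 in this run) even though the remaining nodes are healthy. Sketch:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status; echo $?   # prints 7 while m03 reports host: Stopped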

TestMultiNode/serial/StartAfterStop (8.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 node start m03 -v=5 --alsologtostderr
E1210 06:28:37.012479    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-472894 node start m03 -v=5 --alsologtostderr: (7.313733473s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.11s)

TestMultiNode/serial/RestartKeepsNodes (72.65s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-472894
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-472894
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-472894: (25.173108625s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472894 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472894 --wait=true -v=5 --alsologtostderr: (47.327407046s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-472894
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.65s)
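
The property under test is that a full stop/start cycle preserves the node roster. Sketch:

    minikube node list -p multinode-demo        # record the roster
    minikube stop -p multinode-demo
    minikube start -p multinode-demo --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-demo        # must match the pre-stop roster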

TestMultiNode/serial/DeleteNode (5.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-472894 node delete m03: (4.854994242s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

TestMultiNode/serial/StopMultiNode (24.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-472894 stop: (23.900680778s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472894 status: exit status 7 (98.192977ms)
-- stdout --
	multinode-472894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-472894-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr: exit status 7 (90.743346ms)
-- stdout --
	multinode-472894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-472894-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 06:30:20.080498  178342 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:30:20.080618  178342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:30:20.080627  178342 out.go:374] Setting ErrFile to fd 2...
	I1210 06:30:20.080632  178342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:30:20.080886  178342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:30:20.081071  178342 out.go:368] Setting JSON to false
	I1210 06:30:20.081105  178342 mustload.go:66] Loading cluster: multinode-472894
	I1210 06:30:20.081216  178342 notify.go:221] Checking for updates...
	I1210 06:30:20.081514  178342 config.go:182] Loaded profile config "multinode-472894": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 06:30:20.081537  178342 status.go:174] checking status of multinode-472894 ...
	I1210 06:30:20.082345  178342 cli_runner.go:164] Run: docker container inspect multinode-472894 --format={{.State.Status}}
	I1210 06:30:20.100687  178342 status.go:371] multinode-472894 host status = "Stopped" (err=<nil>)
	I1210 06:30:20.100713  178342 status.go:384] host is not running, skipping remaining checks
	I1210 06:30:20.100721  178342 status.go:176] multinode-472894 status: &{Name:multinode-472894 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:30:20.100752  178342 status.go:174] checking status of multinode-472894-m02 ...
	I1210 06:30:20.101060  178342 cli_runner.go:164] Run: docker container inspect multinode-472894-m02 --format={{.State.Status}}
	I1210 06:30:20.124609  178342 status.go:371] multinode-472894-m02 host status = "Stopped" (err=<nil>)
	I1210 06:30:20.124635  178342 status.go:384] host is not running, skipping remaining checks
	I1210 06:30:20.124648  178342 status.go:176] multinode-472894-m02 status: &{Name:multinode-472894-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.09s)

TestMultiNode/serial/RestartMultiNode (50.6s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472894 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472894 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.892015115s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472894 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.60s)

TestMultiNode/serial/ValidateNameConflict (42.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-472894
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472894-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-472894-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.781975ms)
-- stdout --
	* [multinode-472894-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-472894-m02' is duplicated with machine name 'multinode-472894-m02' in profile 'multinode-472894'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472894-m03 --driver=docker  --container-runtime=containerd
E1210 06:31:38.876232    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:31:44.571154    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472894-m03 --driver=docker  --container-runtime=containerd: (39.518677948s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-472894
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-472894: exit status 80 (427.817665ms)
-- stdout --
	* Adding node m03 to cluster multinode-472894 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-472894-m03 already exists in multinode-472894-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-472894-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-472894-m03: (2.076153478s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.17s)
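
Two collisions are exercised: a new profile may not reuse an existing profile's machine name (exit 14, MK_USAGE), and node add fails (exit 80, GUEST_NODE_ADD) when the node name it would generate is already taken by another profile. Sketch:

    minikube start -p multinode-demo-m02 --driver=docker --container-runtime=containerd   # exit 14: clashes with multinode-demo's second machine
    minikube start -p multinode-demo-m03 --driver=docker --container-runtime=containerd   # fine as a standalone profile
    minikube node add -p multinode-demo    # exit 80: would create multinode-demo-m03, which now exists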

TestPreload (121.8s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618341 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618341 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (57.365434129s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618341 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-618341 image pull gcr.io/k8s-minikube/busybox: (2.201711586s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-618341
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-618341: (5.879669768s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618341 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1210 06:33:20.087365    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:33:37.012089    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618341 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (53.718559882s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618341 image list
helpers_test.go:176: Cleaning up "test-preload-618341" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-618341
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-618341: (2.399405725s)
--- PASS: TestPreload (121.80s)
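
The scenario: build the cluster with preloads disabled, pull an extra image, then restart with preloads enabled and confirm the manually pulled image survives. Sketch:

    minikube start -p preload-demo --memory=3072 --preload=false \
      --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true --wait=true \
      --driver=docker --container-runtime=containerd
    minikube -p preload-demo image list   # busybox must still be listed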

TestScheduledStopUnix (114.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-916458 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-916458 --memory=3072 --driver=docker  --container-runtime=containerd: (37.416662824s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916458 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1210 06:34:36.336308  195444 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:34:36.336429  195444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:36.336438  195444 out.go:374] Setting ErrFile to fd 2...
	I1210 06:34:36.336443  195444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:36.336689  195444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:34:36.336937  195444 out.go:368] Setting JSON to false
	I1210 06:34:36.337071  195444 mustload.go:66] Loading cluster: scheduled-stop-916458
	I1210 06:34:36.337414  195444 config.go:182] Loaded profile config "scheduled-stop-916458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 06:34:36.337485  195444 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/config.json ...
	I1210 06:34:36.337663  195444 mustload.go:66] Loading cluster: scheduled-stop-916458
	I1210 06:34:36.337784  195444 config.go:182] Loaded profile config "scheduled-stop-916458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-916458 -n scheduled-stop-916458
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916458 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1210 06:34:36.780394  195535 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:34:36.780566  195535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:36.780596  195535 out.go:374] Setting ErrFile to fd 2...
	I1210 06:34:36.780615  195535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:34:36.780897  195535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:34:36.781194  195535 out.go:368] Setting JSON to false
	I1210 06:34:36.781445  195535 daemonize_unix.go:73] killing process 195466 as it is an old scheduled stop
	I1210 06:34:36.783177  195535 mustload.go:66] Loading cluster: scheduled-stop-916458
	I1210 06:34:36.783614  195535 config.go:182] Loaded profile config "scheduled-stop-916458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 06:34:36.783688  195535 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/config.json ...
	I1210 06:34:36.783871  195535 mustload.go:66] Loading cluster: scheduled-stop-916458
	I1210 06:34:36.783978  195535 config.go:182] Loaded profile config "scheduled-stop-916458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 06:34:36.791320    4116 retry.go:31] will retry after 108.71µs: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.792491    4116 retry.go:31] will retry after 165.649µs: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.793640    4116 retry.go:31] will retry after 251.276µs: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.794781    4116 retry.go:31] will retry after 443.135µs: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.795911    4116 retry.go:31] will retry after 699.527µs: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.797455    4116 retry.go:31] will retry after 1.029606ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.798615    4116 retry.go:31] will retry after 582.949µs: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.799744    4116 retry.go:31] will retry after 1.340331ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.801214    4116 retry.go:31] will retry after 2.492438ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.804417    4116 retry.go:31] will retry after 2.067631ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.806624    4116 retry.go:31] will retry after 3.315796ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.811187    4116 retry.go:31] will retry after 7.803875ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.819430    4116 retry.go:31] will retry after 7.764562ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.827654    4116 retry.go:31] will retry after 17.761803ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.849481    4116 retry.go:31] will retry after 23.969593ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
I1210 06:34:36.873723    4116 retry.go:31] will retry after 63.978659ms: open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916458 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-916458 -n scheduled-stop-916458
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-916458
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916458 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1210 06:35:02.773647  196233 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:35:02.773867  196233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:35:02.773897  196233 out.go:374] Setting ErrFile to fd 2...
	I1210 06:35:02.773920  196233 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:35:02.774187  196233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:35:02.774476  196233 out.go:368] Setting JSON to false
	I1210 06:35:02.774612  196233 mustload.go:66] Loading cluster: scheduled-stop-916458
	I1210 06:35:02.775004  196233 config.go:182] Loaded profile config "scheduled-stop-916458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1210 06:35:02.775147  196233 profile.go:143] Saving config to /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/scheduled-stop-916458/config.json ...
	I1210 06:35:02.775367  196233 mustload.go:66] Loading cluster: scheduled-stop-916458
	I1210 06:35:02.775525  196233 config.go:182] Loaded profile config "scheduled-stop-916458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-916458
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-916458: exit status 7 (69.445069ms)
-- stdout --
	scheduled-stop-916458
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-916458 -n scheduled-stop-916458
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-916458 -n scheduled-stop-916458: exit status 7 (73.098382ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-916458" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-916458
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-916458: (4.966050046s)
--- PASS: TestScheduledStopUnix (114.02s)
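
The scheduling semantics exercised above: each --schedule call replaces any pending one (the older daemonized stop process is killed), --cancel-scheduled clears it, and once a scheduled stop fires, status exits 7. Sketch:

    minikube stop -p sched-demo --schedule 5m      # arm a stop five minutes out
    minikube stop -p sched-demo --schedule 15s     # re-arm; the 5m schedule is discarded
    minikube stop -p sched-demo --cancel-scheduled
    minikube status -p sched-demo                  # still Running
    minikube stop -p sched-demo --schedule 15s
    sleep 20; minikube status -p sched-demo        # exit 7 once the stop has fired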

TestInsufficientStorage (8.96s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-645762 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-645762 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.359348338s)
-- stdout --
	{"specversion":"1.0","id":"f16b9baa-55a3-4e73-80cd-b314b4a91360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-645762] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6916f8b1-4959-4ab5-87d7-60e0c871b894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"66c27be4-562d-4fb5-9372-bb46740ff86a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a424f80d-acd5-42fd-ae4f-3c8179b7d8ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig"}}
	{"specversion":"1.0","id":"466f6efa-d13d-4390-81c3-a12293bd3329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube"}}
	{"specversion":"1.0","id":"eb7a61ec-8755-402e-91e1-7dcca2163997","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b895be17-4812-4102-944d-08a2a7284be2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8933b1a4-fdc8-4b6f-8d53-12c166da08d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"10c1003d-2c3c-4a7f-a523-118901d301d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fb6a0d71-a5fb-409d-9774-354b9b4657bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"30ac2d4c-f4c7-4e47-8b72-66d7118983b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7c1e4947-a04f-45dd-80c3-3fa07056bad5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-645762\" primary control-plane node in \"insufficient-storage-645762\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"79e23708-c8ed-4001-a9a9-131e7b50c5da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"920a03a3-cd65-46a7-ab93-772613370bca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"173e061f-5a7d-474e-a5c9-4c353758e2a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-645762 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-645762 --output=json --layout=cluster: exit status 7 (292.580061ms)
-- stdout --
	{"Name":"insufficient-storage-645762","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-645762","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1210 06:35:59.525026  197990 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-645762" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-645762 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-645762 --output=json --layout=cluster: exit status 7 (304.528853ms)

-- stdout --
	{"Name":"insufficient-storage-645762","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-645762","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1210 06:35:59.829359  198057 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-645762" does not appear in /home/jenkins/minikube-integration/22094-2307/kubeconfig
	E1210 06:35:59.839083  198057 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/insufficient-storage-645762/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-645762" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-645762
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-645762: (2.006895194s)
--- PASS: TestInsufficientStorage (8.96s)
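
For anyone scripting against this report: the `status --output=json --layout=cluster` payloads above are plain JSON and straightforward to decode. A minimal Go sketch, assuming only the field names visible in the captured stdout (the structs are illustrative, not minikube's exported types):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Shapes mirror the fields seen in the captured output above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterState struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
	Nodes        []node `json:"Nodes"`
}

func main() {
	// Trimmed copy of the payload captured by status_test.go above.
	raw := `{"Name":"insufficient-storage-645762","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-645762","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterState
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (%d): %s\n", st.Name, st.StatusName, st.StatusCode, st.StatusDetail)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}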

TestRunningBinaryUpgrade (62.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2670589495 start -p running-upgrade-919917 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2670589495 start -p running-upgrade-919917 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.72987829s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-919917 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-919917 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.851100879s)
helpers_test.go:176: Cleaning up "running-upgrade-919917" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-919917
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-919917: (2.5926789s)
--- PASS: TestRunningBinaryUpgrade (62.35s)

TestMissingContainerUpgrade (130.44s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.853242961 start -p missing-upgrade-894532 --memory=3072 --driver=docker  --container-runtime=containerd
E1210 06:36:38.876533    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:36:44.571603    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.853242961 start -p missing-upgrade-894532 --memory=3072 --driver=docker  --container-runtime=containerd: (1m5.430117336s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-894532
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-894532
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-894532 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-894532 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.822724783s)
helpers_test.go:176: Cleaning up "missing-upgrade-894532" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-894532
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-894532: (2.432229375s)
--- PASS: TestMissingContainerUpgrade (130.44s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-496157 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-496157 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (99.657612ms)

-- stdout --
	* [NoKubernetes-496157] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
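
The pass here hinges on the exit code: combining `--no-kubernetes` with `--kubernetes-version` must fail with exit status 14 (MK_USAGE), which is what the harness checks above. A sketch of that assertion with os/exec, with the binary path and flags copied from the log (this mirrors, but is not, the test's own helper):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Invocation copied from the test log above.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "NoKubernetes-496157",
		"--no-kubernetes", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()

	// A usage error must surface as exit status 14 (MK_USAGE).
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		log.Fatalf("expected the start to fail with a usage error, got: %v", err)
	}
	if code := exitErr.ExitCode(); code != 14 {
		log.Fatalf("expected exit status 14 (MK_USAGE), got %d", code)
	}
	fmt.Println("flag conflict rejected as expected")
}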

TestNoKubernetes/serial/StartWithK8s (53.24s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-496157 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-496157 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (52.760139881s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-496157 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (53.24s)

TestNoKubernetes/serial/StartWithStopK8s (23.91s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-496157 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-496157 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.583462115s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-496157 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-496157 status -o json: exit status 2 (312.226675ms)

-- stdout --
	{"Name":"NoKubernetes-496157","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-496157
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-496157: (2.010205621s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.91s)

TestNoKubernetes/serial/Start (7.51s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-496157 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-496157 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.51095471s)
--- PASS: TestNoKubernetes/serial/Start (7.51s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22094-2307/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-496157 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-496157 "sudo systemctl is-active --quiet service kubelet": exit status 1 (341.100509ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

TestNoKubernetes/serial/ProfileList (0.91s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.91s)

TestNoKubernetes/serial/Stop (2.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-496157
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-496157: (2.305066396s)
--- PASS: TestNoKubernetes/serial/Stop (2.31s)

TestNoKubernetes/serial/StartNoArgs (7.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-496157 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-496157 --driver=docker  --container-runtime=containerd: (7.295000841s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.30s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-496157 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-496157 "sudo systemctl is-active --quiet service kubelet": exit status 1 (360.477072ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (1.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.36s)

TestStoppedBinaryUpgrade/Upgrade (58.41s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3899490966 start -p stopped-upgrade-383661 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3899490966 start -p stopped-upgrade-383661 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (38.353511561s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3899490966 -p stopped-upgrade-383661 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3899490966 -p stopped-upgrade-383661 stop: (1.243438475s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-383661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-383661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.809010183s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.41s)
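
The whole upgrade scenario is three sequential invocations against one profile: the old release binary starts it, stops it, and the binary under test starts it again. Condensed into a Go sketch (binary paths and flags copied from the log; error handling kept minimal):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run shells out and fails fast, mirroring the harness's (dbg) Run steps.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.35.0.3899490966" // released binary
	newBin := "out/minikube-linux-arm64"         // binary under test
	profile := "stopped-upgrade-383661"

	run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=containerd")
	run(oldBin, "-p", profile, "stop")
	run(newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=containerd")
}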

TestStoppedBinaryUpgrade/MinikubeLogs (1.65s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-383661
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-383661: (1.645797454s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.65s)

TestPause/serial/Start (59.59s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-094036 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-094036 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (59.590784215s)
--- PASS: TestPause/serial/Start (59.59s)

TestPause/serial/SecondStartNoReconfiguration (7.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-094036 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-094036 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.523982344s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.54s)

TestPause/serial/Pause (0.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-094036 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-094036 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-094036 --output=json --layout=cluster: exit status 2 (329.381967ms)

-- stdout --
	{"Name":"pause-094036","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-094036","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
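
The StatusCode values in these payloads track HTTP status numbers: 200 OK, 405 Stopped, 418 Paused, 500 Error, and 507 InsufficientStorage are the ones that appear in this report. A small lookup sketch covering only the codes observed here (not minikube's full table):

package main

import "fmt"

// statusNames lists only the StatusCode values seen in this report.
var statusNames = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	// The paused profile above reports 418 for apiserver, 405 for kubelet.
	for _, code := range []int{418, 405, 200, 507} {
		fmt.Printf("%d => %s\n", code, statusNames[code])
	}
}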

TestPause/serial/Unpause (0.65s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-094036 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (0.88s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-094036 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

TestPause/serial/DeletePaused (3.06s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-094036 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-094036 --alsologtostderr -v=5: (3.062137546s)
--- PASS: TestPause/serial/DeletePaused (3.06s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-094036
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-094036: exit status 1 (18.175238ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-094036: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

TestNetworkPlugins/group/false (3.77s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-225109 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-225109 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (191.091946ms)

-- stdout --
	* [false-225109] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I1210 06:42:19.544094  241880 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:42:19.544235  241880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:19.544247  241880 out.go:374] Setting ErrFile to fd 2...
	I1210 06:42:19.544253  241880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:42:19.544548  241880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22094-2307/.minikube/bin
	I1210 06:42:19.545006  241880 out.go:368] Setting JSON to false
	I1210 06:42:19.545888  241880 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5090,"bootTime":1765343850,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 06:42:19.545958  241880 start.go:143] virtualization:  
	I1210 06:42:19.549875  241880 out.go:179] * [false-225109] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 06:42:19.553679  241880 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:42:19.553773  241880 notify.go:221] Checking for updates...
	I1210 06:42:19.559995  241880 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:42:19.562852  241880 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22094-2307/kubeconfig
	I1210 06:42:19.566166  241880 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22094-2307/.minikube
	I1210 06:42:19.569057  241880 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 06:42:19.571961  241880 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:42:19.575416  241880 config.go:182] Loaded profile config "kubernetes-upgrade-712093": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1210 06:42:19.575523  241880 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:42:19.604944  241880 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 06:42:19.605050  241880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:42:19.662001  241880 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 06:42:19.650850613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 06:42:19.662097  241880 docker.go:319] overlay module found
	I1210 06:42:19.665212  241880 out.go:179] * Using the docker driver based on user configuration
	I1210 06:42:19.668131  241880 start.go:309] selected driver: docker
	I1210 06:42:19.668150  241880 start.go:927] validating driver "docker" against <nil>
	I1210 06:42:19.668163  241880 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:42:19.672524  241880 out.go:203] 
	W1210 06:42:19.676629  241880 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1210 06:42:19.679540  241880 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-225109 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-225109

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-225109

>>> host: /etc/nsswitch.conf:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /etc/hosts:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /etc/resolv.conf:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-225109

>>> host: crictl pods:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: crictl containers:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> k8s: describe netcat deployment:
error: context "false-225109" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-225109" does not exist

>>> k8s: netcat logs:
error: context "false-225109" does not exist

>>> k8s: describe coredns deployment:
error: context "false-225109" does not exist

>>> k8s: describe coredns pods:
error: context "false-225109" does not exist

>>> k8s: coredns logs:
error: context "false-225109" does not exist

>>> k8s: describe api server pod(s):
error: context "false-225109" does not exist

>>> k8s: api server logs:
error: context "false-225109" does not exist

>>> host: /etc/cni:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: ip a s:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: ip r s:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: iptables-save:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: iptables table nat:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> k8s: describe kube-proxy daemon set:
error: context "false-225109" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-225109" does not exist

>>> k8s: kube-proxy logs:
error: context "false-225109" does not exist

>>> host: kubelet daemon status:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: kubelet daemon config:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> k8s: kubelet logs:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-712093
contexts:
- context:
    cluster: kubernetes-upgrade-712093
    user: kubernetes-upgrade-712093
  name: kubernetes-upgrade-712093
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-712093
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.crt
    client-key: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-225109

>>> host: docker daemon status:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: docker daemon config:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /etc/docker/daemon.json:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: docker system info:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: cri-docker daemon status:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: cri-docker daemon config:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: cri-dockerd version:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: containerd daemon status:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: containerd daemon config:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /etc/containerd/config.toml:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: containerd config dump:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: crio daemon status:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: crio daemon config:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: /etc/crio:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

>>> host: crio config:
* Profile "false-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-225109"

----------------------- debugLogs end: false-225109 [took: 3.424793282s] --------------------------------
helpers_test.go:176: Cleaning up "false-225109" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-225109
--- PASS: TestNetworkPlugins/group/false (3.77s)

TestStartStop/group/old-k8s-version/serial/FirstStart (61.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-806899 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1210 06:48:37.013449    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-806899 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m1.462908432s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.46s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-806899 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a457bfad-87ec-452c-8250-bf728a05c722] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a457bfad-87ec-452c-8250-bf728a05c722] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003887008s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-806899 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)
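
DeployApp passes once pods labeled `integration-test=busybox` in the default namespace become healthy within the 8m0s window, after which the harness runs `ulimit -n` inside the container. A rough Go equivalent of that wait, shelling out to kubectl (context, label, and timeout taken from the log; the polling loop itself is hypothetical, not the harness code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		// jsonpath prints the phase of each matching pod, space-separated;
		// with the single busybox pod above this is just "Running".
		out, err := exec.Command("kubectl",
			"--context", "old-k8s-version-806899",
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("busybox is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for busybox to reach Running")
}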

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-806899 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-806899 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.031634917s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-806899 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-806899 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-806899 --alsologtostderr -v=3: (12.090090315s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-806899 -n old-k8s-version-806899
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-806899 -n old-k8s-version-806899: exit status 7 (70.545279ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-806899 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (50.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-806899 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-806899 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.077808018s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-806899 -n old-k8s-version-806899
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.44s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-66m9m" [755f71c1-1dc0-45de-a8a9-4db0f924e9b1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003620151s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-66m9m" [755f71c1-1dc0-45de-a8a9-4db0f924e9b1] Running
E1210 06:50:00.088741    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003949269s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-806899 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)
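
The same readiness check can be approximated with kubectl alone; a sketch using the label selector and namespace from the log above, with the test's 9m budget as the timeout:

kubectl --context old-k8s-version-806899 -n kubernetes-dashboard \
  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
kubectl --context old-k8s-version-806899 -n kubernetes-dashboard \
  describe deploy/dashboard-metrics-scraper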

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-806899 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-806899 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-806899 -n old-k8s-version-806899
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-806899 -n old-k8s-version-806899: exit status 2 (357.428508ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-806899 -n old-k8s-version-806899
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-806899 -n old-k8s-version-806899: exit status 2 (316.495584ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-806899 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-806899 -n old-k8s-version-806899
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-806899 -n old-k8s-version-806899
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.03s)
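
Exit status 2 is the expected signal here: while the cluster is paused, "status" reports the apiserver as Paused and the kubelet as Stopped, and exits non-zero. A minimal sketch of the round trip, assuming the same profile:

PROFILE=old-k8s-version-806899
out/minikube-linux-arm64 pause -p "$PROFILE"
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$PROFILE" || true   # Paused, exit status 2
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p "$PROFILE" || true     # Stopped, exit status 2
out/minikube-linux-arm64 unpause -p "$PROFILE"
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$PROFILE"           # exits 0 again once unpaused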

TestStartStop/group/embed-certs/serial/FirstStart (57.23s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1210 06:51:38.876136    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:51:44.571633    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (57.230811017s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.23s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-451123 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6e713278-882c-4ceb-9943-b2100a15009c] Pending
helpers_test.go:353: "busybox" [6e713278-882c-4ceb-9943-b2100a15009c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6e713278-882c-4ceb-9943-b2100a15009c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003781394s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-451123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)
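
testdata/busybox.yaml itself is not reproduced in this report; a hypothetical minimal equivalent carrying the integration-test=busybox label the test waits on could look like this (image name borrowed from the image list later in this run):

kubectl --context embed-certs-451123 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
kubectl --context embed-certs-451123 exec busybox -- /bin/sh -c "ulimit -n"   # the test then inspects the fd limit printed here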

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.6s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-451123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-451123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.461457882s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-451123 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-451123 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-451123 --alsologtostderr -v=3: (12.089207412s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-451123 -n embed-certs-451123
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-451123 -n embed-certs-451123: exit status 7 (68.376071ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-451123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (49.32s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-451123 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (48.910347553s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-451123 -n embed-certs-451123
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-lvhld" [5695e884-bb2c-4324-bbc8-64289a240bda] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003318967s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-lvhld" [5695e884-bb2c-4324-bbc8-64289a240bda] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003524366s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-451123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.72s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-451123 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
I1210 06:53:14.631155    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 06:53:14.777346    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 06:53:14.932960    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.72s)
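
The image audit can be re-run by hand. A sketch, assuming the JSON emitted by "image list --format=json" carries a repoTags array (as recent minikube releases do), with an illustrative filter for the stock registry:

out/minikube-linux-arm64 -p embed-certs-451123 image list --format=json \
  | jq -r '.[].repoTags[]?' \
  | grep -v '^registry.k8s.io/'   # whatever survives the filter is reported as a non-minikube image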

TestStartStop/group/embed-certs/serial/Pause (2.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-451123 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-451123 -n embed-certs-451123
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-451123 -n embed-certs-451123: exit status 2 (325.167318ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-451123 -n embed-certs-451123
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-451123 -n embed-certs-451123: exit status 2 (333.416991ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-451123 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-451123 -n embed-certs-451123
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-451123 -n embed-certs-451123
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1210 06:53:37.013130    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-944360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:40.888613    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:40.894953    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:40.907353    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:40.928885    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:40.970138    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:41.051687    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:41.213830    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:41.535135    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:42.177419    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:43.459176    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:46.021221    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:53:51.142972    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:54:01.384956    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (58.135696223s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.14s)
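
This profile differs from the default setup only in --apiserver-port=8444; once started, the non-standard port should be visible in the generated kubeconfig. A sketch, assuming the cluster entry is named after the profile:

kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-395269")].cluster.server}'
# expected to end in :8444 rather than the default :8443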

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-395269 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c418ba9e-1984-4972-b9db-0ddb5beaa541] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1210 06:54:21.866718    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [c418ba9e-1984-4972-b9db-0ddb5beaa541] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.008080725s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-395269 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-395269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001769479s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-395269 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-395269 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-395269 --alsologtostderr -v=3: (12.078027386s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269: exit status 7 (70.463531ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-395269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1210 06:55:02.827927    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/old-k8s-version-806899/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-395269 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (55.525764388s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.91s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-lvcsw" [9236a7a6-eed1-47c8-b1c1-b48616f69e91] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003183153s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-lvcsw" [9236a7a6-eed1-47c8-b1c1-b48616f69e91] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002926397s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-395269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-395269 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
I1210 06:55:48.454802    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 06:55:48.624617    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
I1210 06:55:48.777721    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubeadm.sha256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.71s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-395269 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269: exit status 2 (359.850352ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269: exit status 2 (332.238347ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-395269 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-395269 -n default-k8s-diff-port-395269
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

TestStartStop/group/no-preload/serial/Stop (1.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-320236 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-320236 --alsologtostderr -v=3: (1.32641896s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-320236 -n no-preload-320236: exit status 7 (69.445777ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-320236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-168808 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-168808 --alsologtostderr -v=3: (1.313543158s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-168808 -n newest-cni-168808: exit status 7 (71.440648ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-168808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.72s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-168808 image list --format=json
I1210 07:12:16.709047    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
I1210 07:12:16.888795    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
I1210 07:12:17.047666    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.72s)

TestNetworkPlugins/group/auto/Start (59.37s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (59.366146136s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.37s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-225109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-225109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tcqfc" [181e8195-aa15-492a-bb0d-d0e8d7f945bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tcqfc" [181e8195-aa15-492a-bb0d-d0e8d7f945bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003796055s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-225109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
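
DNS, Localhost and HairPin together cover the three basic reachability paths from inside a pod: cluster DNS resolution, a loopback connection to the pod's own container, and a hairpin connection back through the pod's own Service (the "netcat" host below is assumed to resolve to the Service created by testdata/netcat-deployment.yaml):

kubectl --context auto-225109 exec deployment/netcat -- nslookup kubernetes.default                   # DNS
kubectl --context auto-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
kubectl --context auto-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the Service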

TestNetworkPlugins/group/kindnet/Start (59.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (59.944536599s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.94s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-kl65l" [001f05b7-2917-461e-9722-e5aeb854850a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003988632s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
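
For CNI plugins that ship their own controller (kindnet here, flannel further down), the suite waits for the controller pod before probing pod networking; a rough kubectl equivalent of this wait:

kubectl --context kindnet-225109 -n kube-system \
  wait --for=condition=Ready pod -l app=kindnet --timeout=10m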

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-225109 "pgrep -a kubelet"
I1210 07:15:06.066810    4116 config.go:182] Loaded profile config "kindnet-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-225109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7bvgr" [7c2e1ad1-de51-4f9c-8864-18ecdf4c5023] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7bvgr" [7c2e1ad1-de51-4f9c-8864-18ecdf4c5023] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003588874s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-225109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (62.96s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m2.960175484s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.96s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-p9dp8" [02f332bd-2ddb-4a59-8ccc-9c57ef25bd09] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004175856s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-225109 "pgrep -a kubelet"
I1210 07:16:46.724721    4116 config.go:182] Loaded profile config "flannel-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-225109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hbcq6" [b54e368f-49b7-4b39-80ac-005d023d1561] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hbcq6" [b54e368f-49b7-4b39-80ac-005d023d1561] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004135508s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-225109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (79.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.971480378s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.97s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-225109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-225109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-llk4w" [31befede-930f-45ca-970b-7255578dc4a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-llk4w" [31befede-930f-45ca-970b-7255578dc4a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003726571s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-225109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (87.31s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m27.310063797s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.31s)

TestNetworkPlugins/group/calico/Start (81.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1210 07:20:20.244253    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m21.579328529s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.58s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-225109 "pgrep -a kubelet"
I1210 07:20:38.624056    4116 config.go:182] Loaded profile config "bridge-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (11.47s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-225109 replace --force -f testdata/netcat-deployment.yaml
I1210 07:20:39.057680    4116 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vjqg6" [a73930b9-e6ad-40c3-a7a5-79a7f511ed1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:20:40.726503    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-vjqg6" [a73930b9-e6ad-40c3-a7a5-79a7f511ed1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.0038899s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.47s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-225109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.29s)

TestNetworkPlugins/group/custom-flannel/Start (69.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1210 07:21:21.688158    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kindnet-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-225109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m9.447151334s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.45s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-dn6wd" [6255a75c-2418-4fe4-ae81-0e88c320e1d9] Running
E1210 07:21:38.876817    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/functional-644034/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:40.409523    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:40.415862    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:40.427188    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:40.448553    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:40.489851    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:40.571227    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:40.732678    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:41.053940    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:41.695684    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005191914s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-225109 "pgrep -a kubelet"
E1210 07:21:42.184594    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:42.191243    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:42.202582    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:42.224611    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:42.266020    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1210 07:21:42.274470    4116 config.go:182] Loaded profile config "calico-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-225109 replace --force -f testdata/netcat-deployment.yaml
E1210 07:21:42.348347    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:42.511094    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1210 07:21:42.640315    4116 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-szhfz" [81f37b38-d846-4439-b3c5-c5ec059e14ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:21:42.832821    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:42.977117    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:43.474875    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:44.571503    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/addons-173024/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:44.756714    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:45.538537    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:47.318578    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-szhfz" [81f37b38-d846-4439-b3c5-c5ec059e14ce] Running
E1210 07:21:50.660272    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/flannel-225109/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:52.440545    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/no-preload-320236/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003163211s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.39s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-225109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-225109 "pgrep -a kubelet"
I1210 07:22:26.000351    4116 config.go:182] Loaded profile config "custom-flannel-225109": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-225109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7wjkg" [214eb60a-fdb0-43d1-9375-141658be566e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7wjkg" [214eb60a-fdb0-43d1-9375-141658be566e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003621282s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-225109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-225109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

Test skip (35/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
13 TestDownloadOnly/v1.34.3/preload-exists 0.25
16 TestDownloadOnly/v1.34.3/kubectl 0
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0.06
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.15
392 TestNetworkPlugins/group/kubenet 3.59
400 TestNetworkPlugins/group/cilium 4.04

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.3/preload-exists (0.25s)

=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1210 05:29:11.422501    4116 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
W1210 05:29:11.522242    4116 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 status code: 404
W1210 05:29:11.673363    4116 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.34.3/preload-exists (0.25s)
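
The 404s above can be reproduced outside the test suite; a minimal check against the first mirror (URL copied verbatim from the log, expected to print a 404 status here):

  curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 | head -n 1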

TestDownloadOnly/v1.34.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1210 05:29:15.467919    4116 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
W1210 05:29:15.514623    4116 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
W1210 05:29:15.526859    4116 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.06s)

TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
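
Whether a given cluster is affected by this skip can be checked directly from the node info; a small sketch using plain kubectl (nothing minikube-specific assumed):

  kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'   # e.g. arm64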

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-595993" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-595993
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.59s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-225109 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-225109

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-225109" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt
extensions:
- extension:
last-update: Wed, 10 Dec 2025 06:38:39 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-712093
contexts:
- context:
cluster: kubernetes-upgrade-712093
user: kubernetes-upgrade-712093
name: kubernetes-upgrade-712093
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-712093
user:
client-certificate: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.crt
client-key: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-225109

>>> host: docker daemon status:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: docker daemon config:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: docker system info:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: cri-docker daemon status:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: cri-docker daemon config:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: cri-dockerd version:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: containerd daemon status:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: containerd daemon config:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: containerd config dump:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: crio daemon status:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: crio daemon config:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: /etc/crio:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

>>> host: crio config:
* Profile "kubenet-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-225109"

----------------------- debugLogs end: kubenet-225109 [took: 3.430527109s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-225109" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-225109
--- SKIP: TestNetworkPlugins/group/kubenet (3.59s)
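
Every ">>> host:" probe in the kubenet debugLogs above degrades to the same two-line fallback because the kubenet-225109 profile does not exist when the collector runs: debugLogs walks its full checklist regardless of whether the cluster is up. A minimal sketch of checking that state by hand with the binary this suite builds (the exact "profile list" output format is not captured in this run, so treat it as indicative):

# Lists only the profiles that actually exist; kubenet-225109 will be absent.
out/minikube-linux-arm64 profile list
# The fallback's own suggestion, with the flags this job uses elsewhere:
out/minikube-linux-arm64 start -p kubenet-225109 --driver=docker --container-runtime=containerd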

TestNetworkPlugins/group/cilium (4.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-225109 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-225109

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-225109

>>> host: /etc/nsswitch.conf:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /etc/hosts:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /etc/resolv.conf:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-225109

>>> host: crictl pods:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: crictl containers:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> k8s: describe netcat deployment:
error: context "cilium-225109" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-225109" does not exist

>>> k8s: netcat logs:
error: context "cilium-225109" does not exist

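The ">>> netcat:" probes above are the harness's in-pod DNS and connectivity checks; 10.96.0.10 is where cluster DNS sits under kubeadm's default 10.96.0.0/12 service range. A hypothetical sketch of the first two probes as they would run against a live cluster (the deploy/netcat exec target is an assumption for illustration, not taken from this log):

kubectl --context cilium-225109 exec deploy/netcat -- nslookup kubernetes.default
kubectl --context cilium-225109 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
# In this run both fail client-side with the configuration error shown above,
# because no cilium-225109 context was ever written to the kubeconfig.
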
>>> k8s: describe coredns deployment:
error: context "cilium-225109" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-225109" does not exist

>>> k8s: coredns logs:
error: context "cilium-225109" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-225109" does not exist

>>> k8s: api server logs:
error: context "cilium-225109" does not exist

>>> host: /etc/cni:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: ip a s:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: ip r s:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: iptables-save:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: iptables table nat:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-225109

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-225109

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-225109" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-225109" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-225109

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-225109

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-225109" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-225109" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-225109" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-225109" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-225109" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: kubelet daemon config:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> k8s: kubelet logs:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22094-2307/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:38:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-712093
contexts:
- context:
    cluster: kubernetes-upgrade-712093
    user: kubernetes-upgrade-712093
  name: kubernetes-upgrade-712093
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-712093
  user:
    client-certificate: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.crt
    client-key: /home/jenkins/minikube-integration/22094-2307/.minikube/profiles/kubernetes-upgrade-712093/client.key
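
The dumped kubeconfig explains both error flavors in this section: the only entry left over is kubernetes-upgrade-712093 and current-context is empty, so any kubectl invocation naming cilium-225109 fails client-side before an API server is contacted. A short sketch (expected shapes, not output captured from this run):

# Contexts that actually exist in this kubeconfig:
kubectl config get-contexts -o name
# kubernetes-upgrade-712093

# Naming an absent context fails exactly as the log lines above show:
kubectl --context cilium-225109 get pods
# Error in configuration: context was not found for specified context: cilium-225109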
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-225109

>>> host: docker daemon status:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: docker daemon config:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: docker system info:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: cri-docker daemon status:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: cri-docker daemon config:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: cri-dockerd version:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: containerd daemon status:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: containerd daemon config:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: containerd config dump:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: crio daemon status:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: crio daemon config:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: /etc/crio:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

>>> host: crio config:
* Profile "cilium-225109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225109"

----------------------- debugLogs end: cilium-225109 [took: 3.87564789s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-225109" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-225109
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)